Jan 27 14:05:12 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 14:05:12 crc restorecon[4692]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 27 14:05:12 crc restorecon[4692]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 
14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 
crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 
14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc 
restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:12 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 
crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc 
restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc 
restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc 
restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 14:05:13 crc restorecon[4692]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 14:05:13 crc kubenswrapper[4729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:05:13 crc kubenswrapper[4729]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 14:05:13 crc kubenswrapper[4729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:05:13 crc kubenswrapper[4729]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 14:05:13 crc kubenswrapper[4729]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 14:05:13 crc kubenswrapper[4729]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.824951 4729 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829944 4729 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829969 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829975 4729 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829986 4729 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829990 4729 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829993 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.829998 4729 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830002 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830006 4729 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 
27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830010 4729 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830013 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830017 4729 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830020 4729 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830024 4729 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830027 4729 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830031 4729 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830034 4729 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830038 4729 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830041 4729 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830044 4729 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830048 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830053 4729 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830057 4729 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830061 4729 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830065 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830069 4729 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830073 4729 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830076 4729 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830081 4729 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830086 4729 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830090 4729 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830094 4729 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830098 4729 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830102 4729 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830105 4729 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830110 4729 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830113 4729 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830117 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830120 4729 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830124 4729 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830127 4729 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830131 4729 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830135 4729 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830138 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830142 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830146 4729 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830149 4729 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830153 4729 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830156 4729 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830161 4729 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830166 4729 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830169 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830173 4729 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830178 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830182 4729 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830185 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830189 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830193 4729 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830197 4729 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830202 4729 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830206 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830210 4729 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830214 4729 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830218 4729 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830222 4729 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830225 4729 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830229 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830234 4729 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830238 4729 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830241 4729 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.830245 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830339 4729 flags.go:64] FLAG: --address="0.0.0.0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830348 4729 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830355 4729 flags.go:64] FLAG: --anonymous-auth="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830361 4729 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830367 4729 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830371 4729 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830377 4729 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830383 4729 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830388 4729 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830392 4729 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830396 4729 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830401 4729 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830405 4729 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830409 4729 flags.go:64] FLAG: --cgroup-root=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830414 4729 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830419 4729 flags.go:64] FLAG: --client-ca-file=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830423 4729 flags.go:64] FLAG: --cloud-config=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830427 4729 flags.go:64] FLAG: --cloud-provider=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830431 4729 flags.go:64] FLAG: --cluster-dns="[]"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830436 4729 flags.go:64] FLAG: --cluster-domain=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830440 4729 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830444 4729 flags.go:64] FLAG: --config-dir=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830449 4729 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830453 4729 flags.go:64] FLAG: --container-log-max-files="5"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830458 4729 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830463 4729 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830467 4729 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830472 4729 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830476 4729 flags.go:64] FLAG: --contention-profiling="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830480 4729 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830485 4729 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830489 4729 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830493 4729 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830498 4729 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830502 4729 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830506 4729 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830511 4729 flags.go:64] FLAG: --enable-load-reader="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830515 4729 flags.go:64] FLAG: --enable-server="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830519 4729 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830526 4729 flags.go:64] FLAG: --event-burst="100"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830530 4729 flags.go:64] FLAG: --event-qps="50"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830534 4729 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830539 4729 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830543 4729 flags.go:64] FLAG: --eviction-hard=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830548 4729 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830552 4729 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830556 4729 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830561 4729 flags.go:64] FLAG: --eviction-soft=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830565 4729 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830569 4729 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830573 4729 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830577 4729 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830581 4729 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830585 4729 flags.go:64] FLAG: --fail-swap-on="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830589 4729 flags.go:64] FLAG: --feature-gates=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830595 4729 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830599 4729 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830603 4729 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830609 4729 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830613 4729 flags.go:64] FLAG: --healthz-port="10248"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830618 4729 flags.go:64] FLAG: --help="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830623 4729 flags.go:64] FLAG: --hostname-override=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830628 4729 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830632 4729 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830637 4729 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830642 4729 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830647 4729 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830651 4729 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830655 4729 flags.go:64] FLAG: --image-service-endpoint=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830659 4729 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830663 4729 flags.go:64] FLAG: --kube-api-burst="100"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830667 4729 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830672 4729 flags.go:64] FLAG: --kube-api-qps="50"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830676 4729 flags.go:64] FLAG: --kube-reserved=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830680 4729 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830684 4729 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830689 4729 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830693 4729 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830698 4729 flags.go:64] FLAG: --lock-file=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830703 4729 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830708 4729 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830713 4729 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830721 4729 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830726 4729 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830731 4729 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830736 4729 flags.go:64] FLAG: --logging-format="text"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830741 4729 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830746 4729 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830750 4729 flags.go:64] FLAG: --manifest-url=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830755 4729 flags.go:64] FLAG: --manifest-url-header=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830767 4729 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830773 4729 flags.go:64] FLAG: --max-open-files="1000000"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830779 4729 flags.go:64] FLAG: --max-pods="110"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830784 4729 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830789 4729 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830794 4729 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830799 4729 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830804 4729 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830809 4729 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830814 4729 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830826 4729 flags.go:64] FLAG: --node-status-max-images="50"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830831 4729 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830836 4729 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830842 4729 flags.go:64] FLAG: --pod-cidr=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830846 4729 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830855 4729 flags.go:64] FLAG: --pod-manifest-path=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830861 4729 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830866 4729 flags.go:64] FLAG: --pods-per-core="0"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830889 4729 flags.go:64] FLAG: --port="10250"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830897 4729 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830902 4729 flags.go:64] FLAG: --provider-id=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830907 4729 flags.go:64] FLAG: --qos-reserved=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830912 4729 flags.go:64] FLAG: --read-only-port="10255"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830917 4729 flags.go:64] FLAG: --register-node="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830922 4729 flags.go:64] FLAG: --register-schedulable="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830927 4729 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830935 4729 flags.go:64] FLAG: --registry-burst="10"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830940 4729 flags.go:64] FLAG: --registry-qps="5"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830945 4729 flags.go:64] FLAG: --reserved-cpus=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830949 4729 flags.go:64] FLAG: --reserved-memory=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830956 4729 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830961 4729 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830969 4729 flags.go:64] FLAG: --rotate-certificates="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830974 4729 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830979 4729 flags.go:64] FLAG: --runonce="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830984 4729 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830989 4729 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830994 4729 flags.go:64] FLAG: --seccomp-default="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.830999 4729 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831004 4729 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831009 4729 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831016 4729 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831021 4729 flags.go:64] FLAG: --storage-driver-password="root"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831025 4729 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831030 4729 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831035 4729 flags.go:64] FLAG: --storage-driver-user="root"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831041 4729 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831046 4729 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831051 4729 flags.go:64] FLAG: --system-cgroups=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831056 4729 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831064 4729 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831069 4729 flags.go:64] FLAG: --tls-cert-file=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831074 4729 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831080 4729 flags.go:64] FLAG: --tls-min-version=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831085 4729 flags.go:64] FLAG: --tls-private-key-file=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831090 4729 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831095 4729 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831100 4729 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831104 4729 flags.go:64] FLAG: --v="2"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831111 4729 flags.go:64] FLAG: --version="false"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831118 4729 flags.go:64] FLAG: --vmodule=""
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831123 4729 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.831129 4729 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831252 4729 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831261 4729 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831266 4729 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831272 4729 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831276 4729 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831280 4729 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831284 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831288 4729 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831293 4729 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831297 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831301 4729 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831305 4729 feature_gate.go:330] unrecognized feature gate: Example
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831311 4729 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831314 4729 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831319 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831323 4729 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831327 4729 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831331 4729 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831335 4729 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831339 4729 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831345 4729 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831349 4729 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831353 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831357 4729 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831361 4729 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831366 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831370 4729 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831375 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831381 4729 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831386 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831392 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831397 4729 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831402 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831409 4729 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831413 4729 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831417 4729 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831423 4729 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831427 4729 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831432 4729 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831436 4729 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831441 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831445 4729 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831450 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831454 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831458 4729 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831462 4729 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831467 4729 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831472 4729 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831478 4729 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831484 4729 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831490 4729 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831496 4729 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831502 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831507 4729 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831512 4729 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831516 4729 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831521 4729 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831526 4729 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831530 4729 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831534 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831539 4729 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831543 4729 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831547 4729 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831557 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831562 4729 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831568 4729 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831572 4729 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831577 4729 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831581 4729 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831585 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.831589 4729 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.832376 4729 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.842355 4729 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.842393 4729 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842470 4729 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842478 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842483 4729 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842486 4729 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842490 4729 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842494 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842498 4729 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842502 4729 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842507 4729 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842511 4729 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842515 4729 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842520 4729 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842524 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842528 4729 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842531 4729 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842535 4729 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842539 4729 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842543 4729 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842546 4729 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842550 4729 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842553 4729 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842557 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842560 4729 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842564 4729 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842567 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842571 4729 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842574 4729 feature_gate.go:330] unrecognized feature gate: 
AlibabaPlatform Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842579 4729 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842583 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842588 4729 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842595 4729 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842599 4729 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842604 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842609 4729 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842613 4729 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842617 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842623 4729 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842627 4729 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842631 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842635 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 
14:05:13.842639 4729 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842646 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842650 4729 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842654 4729 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842658 4729 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842662 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842665 4729 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842669 4729 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842672 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842676 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842680 4729 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842684 4729 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842689 4729 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842694 4729 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842699 4729 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842703 4729 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842707 4729 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842711 4729 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842715 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842720 4729 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842723 4729 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842727 4729 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842730 4729 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842734 4729 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842737 4729 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842741 4729 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842745 4729 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 
14:05:13.842749 4729 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842752 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842756 4729 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842759 4729 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.842765 4729 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842904 4729 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842912 4729 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842916 4729 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842920 4729 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842924 4729 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842929 4729 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842934 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842937 4729 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842941 4729 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842944 4729 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842948 4729 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842952 4729 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842955 4729 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842959 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842962 4729 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842966 4729 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842969 4729 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842973 4729 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842976 4729 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842980 4729 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 
14:05:13.842983 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842987 4729 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842990 4729 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842994 4729 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.842997 4729 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843002 4729 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843006 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843009 4729 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843012 4729 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843016 4729 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843019 4729 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843023 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843027 4729 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843030 4729 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843034 4729 feature_gate.go:330] unrecognized 
feature gate: GCPLabelsTags Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843038 4729 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843041 4729 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843047 4729 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843051 4729 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843054 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843058 4729 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843061 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843065 4729 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843068 4729 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843072 4729 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843075 4729 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843079 4729 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843082 4729 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843085 4729 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843089 4729 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843093 4729 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843098 4729 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843102 4729 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843106 4729 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843110 4729 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843113 4729 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843117 4729 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843121 4729 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843124 4729 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843128 4729 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843131 4729 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843135 4729 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843138 4729 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843142 4729 
feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843146 4729 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843151 4729 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843156 4729 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843160 4729 feature_gate.go:330] unrecognized feature gate: Example Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843164 4729 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843167 4729 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.843186 4729 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.843193 4729 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.843364 4729 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.847442 4729 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 
14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.847538 4729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.849225 4729 server.go:997] "Starting client certificate rotation" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.849262 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.851246 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-27 19:45:27.308717588 +0000 UTC Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.851367 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.884657 4729 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:05:13 crc kubenswrapper[4729]: E0127 14:05:13.888171 4729 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.890225 4729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.909449 4729 log.go:25] "Validated CRI v1 runtime API" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.941753 4729 log.go:25] "Validated CRI v1 image API" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.945210 4729 
server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.950527 4729 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-14-00-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.950578 4729 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.976597 4729 manager.go:217] Machine: {Timestamp:2026-01-27 14:05:13.973500355 +0000 UTC m=+0.557691409 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:854545c8-b5ae-49d8-92cd-d4d0ecee101e BootID:e56ec573-67b5-4644-a470-a69acd2c4e85 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:74:b8:96 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:74:b8:96 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3f:78:bf Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:75:7a:b5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:28:8c:8f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6d:12:fd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ce:bc:a4:71:52:d2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:f9:45:c2:6f:fa Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] 
CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.976935 4729 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.977097 4729 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.979062 4729 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.979318 4729 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.979375 4729 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.980319 4729 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.980351 4729 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.980849 4729 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.980900 4729 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.981087 4729 state_mem.go:36] "Initialized new in-memory state store" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.981462 4729 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.984980 4729 kubelet.go:418] "Attempting to sync node with API server" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.985012 4729 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.985032 4729 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.985050 4729 kubelet.go:324] "Adding apiserver pod source" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.985064 4729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.989066 4729 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.990392 4729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.991743 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:13 crc kubenswrapper[4729]: E0127 14:05:13.991843 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:13 crc kubenswrapper[4729]: W0127 14:05:13.991746 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:13 crc kubenswrapper[4729]: E0127 14:05:13.991910 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.993742 4729 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995090 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995123 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 
14:05:13.995135 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995146 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995163 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995173 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995183 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995200 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995212 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995223 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995236 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.995246 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.996257 4729 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.996867 4729 server.go:1280] "Started kubelet" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.997920 4729 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.997913 4729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.998098 4729 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:13 crc systemd[1]: Started Kubernetes Kubelet. Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.998569 4729 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.999634 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.999670 4729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 27 14:05:13 crc kubenswrapper[4729]: I0127 14:05:13.999698 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:25:26.849957725 +0000 UTC Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:13.999772 4729 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.000421 4729 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:13.999949 4729 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:13.999782 4729 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.000802 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="200ms" Jan 27 14:05:14 crc kubenswrapper[4729]: W0127 14:05:14.000933 4729 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.001034 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.006738 4729 server.go:460] "Adding debug handlers to kubelet server" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.007392 4729 factory.go:55] Registering systemd factory Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.006526 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.171:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9b8875c4370b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:05:13.996834571 +0000 UTC m=+0.581025575,LastTimestamp:2026-01-27 14:05:13.996834571 +0000 UTC m=+0.581025575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.008107 4729 factory.go:221] Registration of the systemd container factory successfully Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.008674 4729 factory.go:153] Registering CRI-O 
factory Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.008813 4729 factory.go:221] Registration of the crio container factory successfully Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.009164 4729 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.009296 4729 factory.go:103] Registering Raw factory Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.009381 4729 manager.go:1196] Started watching for new ooms in manager Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.012433 4729 manager.go:319] Starting recovery of all containers Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.015907 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.015968 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.015986 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016001 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016015 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016030 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016043 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016057 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016072 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016086 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016098 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016111 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016126 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016143 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016158 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016171 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016184 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016197 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016209 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016222 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016234 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016246 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016259 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016272 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016288 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016302 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016318 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016332 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 
14:05:14.016387 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016411 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016432 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016499 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016513 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016528 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016543 4729 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016557 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016571 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016585 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016598 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016613 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016627 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016642 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016656 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016671 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016685 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016698 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016712 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016726 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016740 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016789 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016804 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016820 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016838 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016855 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016870 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016927 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016974 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.016990 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017004 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017019 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017032 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017050 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017064 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017078 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017091 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017105 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017119 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017131 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017146 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017159 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017172 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" 
seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017185 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017199 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017212 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017225 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017239 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017254 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 
14:05:14.017268 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017282 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017297 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017310 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017325 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017339 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017352 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017367 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017381 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017397 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017411 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017427 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017440 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017454 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017467 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017480 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017494 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017508 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017523 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017537 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017552 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017565 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017579 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017593 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017606 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017620 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017634 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017660 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017675 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017693 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017707 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" 
seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017721 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017735 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017754 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017768 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017782 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017797 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 14:05:14 crc 
kubenswrapper[4729]: I0127 14:05:14.017811 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017824 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017839 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017852 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017865 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017904 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017921 4729 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017936 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017952 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017966 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017979 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.017992 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018007 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018021 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018036 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018083 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018099 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018112 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018126 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018140 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018153 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018166 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018179 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018192 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018205 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" 
seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018218 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018230 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018243 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018258 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018272 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018285 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018299 
4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018315 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018328 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018342 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018357 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018370 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018385 4729 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018399 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018412 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018426 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018439 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018452 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018465 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018481 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018494 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018509 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018525 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018542 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018556 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018571 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018585 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018598 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018611 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018625 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018639 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018653 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018667 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018681 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018695 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018710 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018725 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018746 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018760 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018773 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018788 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018804 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018816 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018829 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018847 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018859 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018891 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018905 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018918 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 
14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018930 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.018945 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.020989 4729 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021028 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021048 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021066 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021083 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021101 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021118 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021134 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021152 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021169 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021186 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021205 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021223 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021240 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021259 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021277 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" 
seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021297 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021314 4729 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021330 4729 reconstruct.go:97] "Volume reconstruction finished" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.021343 4729 reconciler.go:26] "Reconciler: start to sync state" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.030739 4729 manager.go:324] Recovery completed Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.042927 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.045251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.045300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.045312 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.047933 4729 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.048096 4729 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.048127 4729 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.048154 4729 state_mem.go:36] "Initialized new in-memory state store" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.049529 4729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.049586 4729 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.049613 4729 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.049696 4729 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 14:05:14 crc kubenswrapper[4729]: W0127 14:05:14.051499 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.051663 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.069239 4729 policy_none.go:49] "None policy: Start" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.070294 4729 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 
14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.070331 4729 state_mem.go:35] "Initializing new in-memory state store" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.101289 4729 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.143967 4729 manager.go:334] "Starting Device Plugin manager" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.144039 4729 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.144057 4729 server.go:79] "Starting device plugin registration server" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.144583 4729 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.144620 4729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.144816 4729 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.145115 4729 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.145134 4729 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.150631 4729 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.150725 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.158840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.158898 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.158911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.159078 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.159470 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.159568 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162303 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162331 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162564 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.162604 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164420 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164554 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164649 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164734 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.164754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165802 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165910 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.165999 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.166022 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.166073 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167077 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167227 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167247 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167267 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.167288 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.168003 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.168023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.168032 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.169761 4729 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.201787 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="400ms" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223087 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223122 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223159 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223175 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223236 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223277 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223323 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223347 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223386 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223413 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223429 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223463 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc 
kubenswrapper[4729]: I0127 14:05:14.223478 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223510 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.223543 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.244973 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.245899 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.245927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.245937 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.245955 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.246312 4729 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.171:6443: connect: connection refused" node="crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325025 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325160 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325212 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325242 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325264 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325285 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325306 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325326 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325343 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325359 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325383 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325400 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325413 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325455 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325492 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325437 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325542 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325548 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325560 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325561 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325578 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325626 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325628 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325648 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325678 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325705 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325720 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325755 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325755 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.325802 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.446476 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.448612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.448683 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.448699 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.448734 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:14 crc 
kubenswrapper[4729]: E0127 14:05:14.449351 4729 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.171:6443: connect: connection refused" node="crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.499226 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.506343 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.520888 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.536708 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.540774 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:14 crc kubenswrapper[4729]: W0127 14:05:14.558066 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4c55da973de56343dd28fd1b2d3ee3b35f5c06f51a61a9d640d7d2116c2d03f5 WatchSource:0}: Error finding container 4c55da973de56343dd28fd1b2d3ee3b35f5c06f51a61a9d640d7d2116c2d03f5: Status 404 returned error can't find the container with id 4c55da973de56343dd28fd1b2d3ee3b35f5c06f51a61a9d640d7d2116c2d03f5 Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.558645 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.171:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e9b8875c4370b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:05:13.996834571 +0000 UTC m=+0.581025575,LastTimestamp:2026-01-27 14:05:13.996834571 +0000 UTC m=+0.581025575,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:05:14 crc kubenswrapper[4729]: W0127 14:05:14.569247 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-aca070811234206cf652d278d5b42fcf8dc381cf9cac4253c7cd7443b95092d8 WatchSource:0}: Error finding container aca070811234206cf652d278d5b42fcf8dc381cf9cac4253c7cd7443b95092d8: Status 404 returned error can't find the container with id 
aca070811234206cf652d278d5b42fcf8dc381cf9cac4253c7cd7443b95092d8 Jan 27 14:05:14 crc kubenswrapper[4729]: W0127 14:05:14.572674 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ec3a0abd73ac79907da1a3964a4952ca26f98c67ca88ed2f8259a49458f107bc WatchSource:0}: Error finding container ec3a0abd73ac79907da1a3964a4952ca26f98c67ca88ed2f8259a49458f107bc: Status 404 returned error can't find the container with id ec3a0abd73ac79907da1a3964a4952ca26f98c67ca88ed2f8259a49458f107bc Jan 27 14:05:14 crc kubenswrapper[4729]: W0127 14:05:14.578533 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c93e75665eb3022ab7c3a463dc45f1b5f0da9a99c291e1bcf383ba3f5ae29980 WatchSource:0}: Error finding container c93e75665eb3022ab7c3a463dc45f1b5f0da9a99c291e1bcf383ba3f5ae29980: Status 404 returned error can't find the container with id c93e75665eb3022ab7c3a463dc45f1b5f0da9a99c291e1bcf383ba3f5ae29980 Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.603223 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="800ms" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.850061 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.851278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.851328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:14 crc 
kubenswrapper[4729]: I0127 14:05:14.851346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.851374 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:14 crc kubenswrapper[4729]: E0127 14:05:14.851865 4729 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.171:6443: connect: connection refused" node="crc" Jan 27 14:05:14 crc kubenswrapper[4729]: I0127 14:05:14.999360 4729 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.000544 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:22:52.678126285 +0000 UTC Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.053767 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec3a0abd73ac79907da1a3964a4952ca26f98c67ca88ed2f8259a49458f107bc"} Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.054688 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aca070811234206cf652d278d5b42fcf8dc381cf9cac4253c7cd7443b95092d8"} Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.055802 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e5810f38cdeb29a76cf5e9544f589ccd164860eabeae42c3004c3f677c2f814"} Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.056975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4c55da973de56343dd28fd1b2d3ee3b35f5c06f51a61a9d640d7d2116c2d03f5"} Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.057784 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c93e75665eb3022ab7c3a463dc45f1b5f0da9a99c291e1bcf383ba3f5ae29980"} Jan 27 14:05:15 crc kubenswrapper[4729]: W0127 14:05:15.068618 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.068693 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:15 crc kubenswrapper[4729]: W0127 14:05:15.084611 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.084697 4729 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:15 crc kubenswrapper[4729]: W0127 14:05:15.366417 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.366835 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.404890 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="1.6s" Jan 27 14:05:15 crc kubenswrapper[4729]: W0127 14:05:15.541099 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.541213 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial 
tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.652498 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.653954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.653992 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.654007 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.654030 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.654423 4729 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.171:6443: connect: connection refused" node="crc" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.936495 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:05:15 crc kubenswrapper[4729]: E0127 14:05:15.937548 4729 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:15 crc kubenswrapper[4729]: I0127 14:05:15.999788 4729 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.000848 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 04:45:14.767655406 +0000 UTC Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.063657 4729 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74" exitCode=0 Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.063751 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.063812 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.064915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.064956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.064969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.070470 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.070634 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.070740 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.070790 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.070805 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.072142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.072300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.072406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.073540 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f" exitCode=0 Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.073584 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.073739 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.075186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.075228 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.075239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.076302 4729 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050" exitCode=0 Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.076482 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.076378 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.076944 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.077685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:16 crc 
kubenswrapper[4729]: I0127 14:05:16.077717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.077734 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.077975 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.078012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.078022 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.081610 4729 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957" exitCode=0 Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.081661 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957"} Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.081762 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.082813 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.082845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.082858 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:16 crc kubenswrapper[4729]: W0127 14:05:16.703331 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:16 crc kubenswrapper[4729]: E0127 14:05:16.703415 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:16 crc kubenswrapper[4729]: I0127 14:05:16.999135 4729 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.001338 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:59:06.159911129 +0000 UTC Jan 27 14:05:17 crc kubenswrapper[4729]: E0127 14:05:17.006445 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="3.2s" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.087587 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.087643 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.087661 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.087672 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.090177 4729 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88" exitCode=0 Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.090222 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.090317 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.091105 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:17 crc 
kubenswrapper[4729]: I0127 14:05:17.091127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.091137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.093174 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.093375 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.094318 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.094343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.094351 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.103090 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.103141 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.103166 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.103182 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16"} Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.103147 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.104692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.104737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.104747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.106171 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.106200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.106209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.123761 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.254784 4729 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.256036 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.256081 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.256095 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:17 crc kubenswrapper[4729]: I0127 14:05:17.256123 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:17 crc kubenswrapper[4729]: E0127 14:05:17.256618 4729 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.171:6443: connect: connection refused" node="crc" Jan 27 14:05:17 crc kubenswrapper[4729]: W0127 14:05:17.433038 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:17 crc kubenswrapper[4729]: E0127 14:05:17.433108 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:17 crc kubenswrapper[4729]: W0127 14:05:17.530242 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.171:6443: connect: connection refused Jan 27 14:05:17 crc kubenswrapper[4729]: E0127 14:05:17.530383 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.171:6443: connect: connection refused" logger="UnhandledError" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.001949 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:20:18.254928886 +0000 UTC Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.108312 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938"} Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.108491 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.109450 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.109479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.109489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.110728 4729 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e" exitCode=0 Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.110825 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.110860 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.111473 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e"} Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.111537 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.111544 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.111566 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112040 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112058 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112072 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112092 4729 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112725 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.112753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:18 crc kubenswrapper[4729]: I0127 14:05:18.972998 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.002983 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:34:56.219193845 +0000 UTC Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120757 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5"} Jan 27 
14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120810 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c"} Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120826 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54"} Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120839 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8"} Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120843 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120851 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69"} Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120847 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120897 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.120967 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122361 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122514 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.122549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.239136 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.483102 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.483331 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 
14:05:19.484411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.484445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:19 crc kubenswrapper[4729]: I0127 14:05:19.484462 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.003277 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:43:27.133516519 +0000 UTC Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.123060 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.123111 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.124114 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.124146 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.124174 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.124183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.124157 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.124250 4729 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.160378 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.457456 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.459506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.459547 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.459558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.459588 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.873802 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:20 crc kubenswrapper[4729]: I0127 14:05:20.910037 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.004013 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:24:16.053417103 +0000 UTC Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.125229 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.125286 4729 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.126368 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.126389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.126426 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.126438 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.126449 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:21 crc kubenswrapper[4729]: I0127 14:05:21.126471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.004654 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:04:16.863384774 +0000 UTC Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.128377 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.129737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.129792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.129807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.484201 4729 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.484293 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.724895 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.725070 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.726373 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.726424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:22 crc kubenswrapper[4729]: I0127 14:05:22.726444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:23 crc kubenswrapper[4729]: I0127 14:05:23.004888 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:34:45.076234714 +0000 UTC Jan 27 14:05:24 crc kubenswrapper[4729]: I0127 14:05:24.005935 4729 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:01:30.459450928 +0000 UTC Jan 27 14:05:24 crc kubenswrapper[4729]: E0127 14:05:24.169947 4729 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.006706 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:02:17.266103666 +0000 UTC Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.836579 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.836807 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.838365 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.838423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.838436 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:25 crc kubenswrapper[4729]: I0127 14:05:25.849623 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.007036 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:50:40.717823866 +0000 UTC Jan 27 14:05:26 crc kubenswrapper[4729]: 
I0127 14:05:26.138082 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.139071 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.139108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.139117 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.142624 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.761389 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.761615 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.762770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.762820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:26 crc kubenswrapper[4729]: I0127 14:05:26.762831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:27 crc kubenswrapper[4729]: I0127 14:05:27.007963 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:56:28.341315164 +0000 UTC Jan 27 14:05:27 crc kubenswrapper[4729]: I0127 
14:05:27.140662 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:27 crc kubenswrapper[4729]: I0127 14:05:27.141986 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:27 crc kubenswrapper[4729]: I0127 14:05:27.142038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:27 crc kubenswrapper[4729]: I0127 14:05:27.142050 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.000509 4729 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.008707 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:05:02.149813651 +0000 UTC Jan 27 14:05:28 crc kubenswrapper[4729]: W0127 14:05:28.100408 4729 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.100499 4729 trace.go:236] Trace[155042876]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:05:18.098) (total time: 10002ms): Jan 27 14:05:28 crc kubenswrapper[4729]: Trace[155042876]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:05:28.100) Jan 27 14:05:28 crc kubenswrapper[4729]: Trace[155042876]: [10.002033111s] 
[10.002033111s] END Jan 27 14:05:28 crc kubenswrapper[4729]: E0127 14:05:28.100522 4729 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.134792 4729 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41288->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.134901 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41288->192.168.126.11:17697: read: connection reset by peer" Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.746883 4729 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.746983 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 14:05:28 
crc kubenswrapper[4729]: I0127 14:05:28.760389 4729 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 14:05:28 crc kubenswrapper[4729]: I0127 14:05:28.760457 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.009629 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:51:00.540149989 +0000 UTC Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.145816 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.147296 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938" exitCode=255 Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.147338 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938"} Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.147483 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:29 crc 
kubenswrapper[4729]: I0127 14:05:29.148318 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.148344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.148355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:29 crc kubenswrapper[4729]: I0127 14:05:29.149059 4729 scope.go:117] "RemoveContainer" containerID="f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938" Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.010831 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:40:22.456265568 +0000 UTC Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.151707 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.153279 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c"} Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.153418 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.154176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.154255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 
14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.154269 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.879366 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:30 crc kubenswrapper[4729]: I0127 14:05:30.913767 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:31 crc kubenswrapper[4729]: I0127 14:05:31.011757 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:40:42.716312714 +0000 UTC Jan 27 14:05:31 crc kubenswrapper[4729]: I0127 14:05:31.155818 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:31 crc kubenswrapper[4729]: I0127 14:05:31.156172 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:31 crc kubenswrapper[4729]: I0127 14:05:31.157456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:31 crc kubenswrapper[4729]: I0127 14:05:31.157487 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:31 crc kubenswrapper[4729]: I0127 14:05:31.157501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.012343 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 20:20:37.444179366 +0000 UTC Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.157976 4729 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.158739 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.158771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.158782 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.484226 4729 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:05:32 crc kubenswrapper[4729]: I0127 14:05:32.484295 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.012775 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:25:02.481311849 +0000 UTC Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.029970 4729 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:33 crc kubenswrapper[4729]: E0127 14:05:33.730196 4729 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.732899 4729 trace.go:236] Trace[989917953]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:05:22.934) (total time: 10797ms): Jan 27 14:05:33 crc kubenswrapper[4729]: Trace[989917953]: ---"Objects listed" error: 10797ms (14:05:33.732) Jan 27 14:05:33 crc kubenswrapper[4729]: Trace[989917953]: [10.797973959s] [10.797973959s] END Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.732929 4729 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:33 crc kubenswrapper[4729]: E0127 14:05:33.733250 4729 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.733285 4729 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.733437 4729 trace.go:236] Trace[205394297]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:05:21.198) (total time: 12534ms): Jan 27 14:05:33 crc kubenswrapper[4729]: Trace[205394297]: ---"Objects listed" error: 12534ms (14:05:33.733) Jan 27 14:05:33 crc kubenswrapper[4729]: Trace[205394297]: [12.53450516s] [12.53450516s] END Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.733457 4729 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.734841 4729 trace.go:236] Trace[662305161]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Jan-2026 14:05:20.211) (total time: 13522ms): Jan 27 14:05:33 crc 
kubenswrapper[4729]: Trace[662305161]: ---"Objects listed" error: 13522ms (14:05:33.734) Jan 27 14:05:33 crc kubenswrapper[4729]: Trace[662305161]: [13.522780592s] [13.522780592s] END Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.734902 4729 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.748498 4729 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.764037 4729 csr.go:261] certificate signing request csr-7cvvp is approved, waiting to be issued Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.772091 4729 csr.go:257] certificate signing request csr-7cvvp is issued Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.849281 4729 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 14:05:33 crc kubenswrapper[4729]: W0127 14:05:33.849713 4729 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 14:05:33 crc kubenswrapper[4729]: W0127 14:05:33.849741 4729 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 14:05:33 crc kubenswrapper[4729]: W0127 14:05:33.849714 4729 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 14:05:33 crc kubenswrapper[4729]: W0127 14:05:33.849757 4729 
reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 27 14:05:33 crc kubenswrapper[4729]: E0127 14:05:33.849686 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.129.56.171:38958->38.129.56.171:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e9b8897ee2fbf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:05:14.570010559 +0000 UTC m=+1.154201573,LastTimestamp:2026-01-27 14:05:14.570010559 +0000 UTC m=+1.154201573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:05:33 crc kubenswrapper[4729]: I0127 14:05:33.997295 4729 apiserver.go:52] "Watching apiserver" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.006655 4729 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.006999 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-dns/node-resolver-8kktz","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.007382 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.007510 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.007612 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.007962 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.007989 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.007984 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.008018 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.008049 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.008233 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.008457 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.011784 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.011784 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.011837 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.012706 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.012899 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:11:58.044623234 +0000 UTC Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.014296 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.014335 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.014403 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.014338 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.014542 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 
14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.016162 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.016360 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.017538 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.042705 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.069356 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.101352 4729 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.135953 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.135997 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136014 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136032 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136048 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136062 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136077 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " 
Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136091 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136106 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136123 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136136 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136151 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136166 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136181 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136195 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136211 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136226 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136249 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.136265 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136279 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136307 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136321 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136335 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136348 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136363 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136377 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136394 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136409 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136426 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136442 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136460 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136474 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136491 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136506 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136523 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") 
pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136538 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136555 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136569 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136584 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136600 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136617 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136633 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136648 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136666 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136681 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136698 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" 
(UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136713 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136728 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136746 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136762 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136777 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136791 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136807 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136822 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136836 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136851 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136868 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 
14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136897 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136912 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136926 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136939 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136953 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136968 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136983 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.136997 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137012 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137026 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137041 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137058 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137073 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137089 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137104 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137120 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137135 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137149 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137172 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137187 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137201 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137217 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137232 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137248 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137264 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137278 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137293 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137308 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.137324 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137339 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137354 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137372 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137401 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137415 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137432 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137449 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137464 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137443 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137480 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137663 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137699 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137730 4729 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137751 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137755 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137803 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137820 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137836 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137852 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137867 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137899 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137915 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137931 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.137948 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137964 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137982 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.137998 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138016 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138031 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138037 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138046 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138096 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138121 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138145 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138175 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138179 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138201 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138227 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138254 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.138281 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138309 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138333 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138390 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138411 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138419 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138446 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138471 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138495 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138516 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138541 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138553 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138568 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138596 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138623 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138649 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138673 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138700 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138724 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138749 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138782 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") 
" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138809 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138836 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138860 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138903 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138926 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138951 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138973 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.138995 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139020 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139044 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139069 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.139093 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139115 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139138 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139165 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139187 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139192 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139233 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139251 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139269 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139287 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139307 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139326 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139346 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139369 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139390 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139594 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139836 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139865 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139932 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139959 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 
14:05:34.139981 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140002 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140025 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140044 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140064 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140083 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140103 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140123 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140140 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140156 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140174 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 
14:05:34.140193 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140211 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140229 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140249 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140266 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140286 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140307 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140392 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140413 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140430 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140449 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: 
\"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140468 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140486 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140502 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140522 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140559 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fea7fb0a-4048-41bf-ac80-6a80a1f5fb92-hosts-file\") pod \"node-resolver-8kktz\" (UID: \"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\") " pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140581 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140605 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140624 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140666 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 
14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140685 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140705 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140724 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140743 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140763 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140781 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140802 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkc6\" (UniqueName: \"kubernetes.io/projected/fea7fb0a-4048-41bf-ac80-6a80a1f5fb92-kube-api-access-6jkc6\") pod \"node-resolver-8kktz\" (UID: \"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\") " pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140822 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140840 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140861 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140919 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140932 4729 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140942 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140952 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140961 4729 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140970 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140980 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145686 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.146942 4729 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160921 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.162289 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.168186 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 
14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139893 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139856 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.139970 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140004 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140012 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140010 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140033 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140140 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140165 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140188 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140301 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140354 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140358 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140384 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140545 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140599 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140609 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140672 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140775 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140768 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140793 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140937 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140980 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.140994 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141031 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141081 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141185 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141239 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141416 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141423 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141429 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141430 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141449 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141512 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141647 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141648 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141671 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141691 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141845 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141863 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141896 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.141946 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.142190 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.142689 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.143099 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.143398 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.143780 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144050 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144185 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144277 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144340 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144346 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144475 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.144604 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145155 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145344 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145479 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145540 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145564 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145640 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145733 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145814 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145827 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145950 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.145955 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.146046 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.146115 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.147845 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.148088 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.148659 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.148828 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.149010 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.149276 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.149633 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.149906 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.150125 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.150311 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.150725 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.153511 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.153551 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.154185 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.154444 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.154696 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.156193 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.156453 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.157396 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.157735 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.158013 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.158276 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.158522 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.159692 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160002 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160039 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160067 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160158 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160169 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160283 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160633 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.160655 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.190648 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.191003 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.191009 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.191037 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.191404 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.191414 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161022 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.160386 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161198 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.161294 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:05:34.661271691 +0000 UTC m=+21.245462755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.191750 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:34.691717596 +0000 UTC m=+21.275908610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.193026 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.193084 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.193566 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.192799 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.199326 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.199362 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.199376 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.199446 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:34.69941723 +0000 UTC m=+21.283608234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.199837 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.201990 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.202133 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.202439 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.202787 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.202815 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.203269 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.203482 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161321 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161342 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161460 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161470 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161566 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161621 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161829 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.203800 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161836 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.161960 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.162002 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.162265 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.162258 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.167016 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.167402 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.167580 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.167843 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.168234 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.168980 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.169186 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.169356 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.169524 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.169686 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.170532 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.172675 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.172923 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.173081 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.173141 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.173202 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.173336 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.173724 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.173813 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174048 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174585 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174658 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174674 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174699 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174925 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.174959 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.175025 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.175085 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.204544 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.175279 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.175523 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.175857 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.175931 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.176319 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.176370 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.176688 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.176734 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.176953 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.177213 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.179148 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.180009 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.180377 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.188210 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.205294 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.205362 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.205396 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.205415 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.205490 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:34.705458993 +0000 UTC m=+21.289649997 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.205607 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.160814 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.205730 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:34.70569804 +0000 UTC m=+21.289889254 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.207909 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.208645 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.208813 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.209291 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.209588 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.209898 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.209991 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.210417 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.210619 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.210682 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.212482 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.217604 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.220501 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.226189 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.239230 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.241784 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.241838 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fea7fb0a-4048-41bf-ac80-6a80a1f5fb92-hosts-file\") pod \"node-resolver-8kktz\" (UID: \"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\") " pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.241895 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.241927 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.241939 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkc6\" (UniqueName: \"kubernetes.io/projected/fea7fb0a-4048-41bf-ac80-6a80a1f5fb92-kube-api-access-6jkc6\") pod \"node-resolver-8kktz\" (UID: \"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\") " pod="openshift-dns/node-resolver-8kktz" Jan 27 
14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242035 4729 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242049 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242064 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242074 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fea7fb0a-4048-41bf-ac80-6a80a1f5fb92-hosts-file\") pod \"node-resolver-8kktz\" (UID: \"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\") " pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242077 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242109 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242119 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242131 4729 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242141 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242151 4729 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242161 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242170 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242179 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242188 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.242198 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242209 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242218 4729 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242227 4729 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242236 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242245 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242254 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242262 
4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242272 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242281 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242290 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242299 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242308 4729 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242317 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242325 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242334 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242343 4729 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242352 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242360 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242369 4729 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242378 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242388 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242397 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242405 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242414 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242422 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242431 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242439 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242448 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242457 4729 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242465 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242474 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242477 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242483 4729 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242543 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242553 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242567 4729 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242593 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242602 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242611 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242620 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242628 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242636 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242643 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242651 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242659 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242667 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242676 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242685 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242694 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242715 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on 
node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242724 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242682 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242733 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242891 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242904 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242915 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242924 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242934 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242944 4729 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 
14:05:34.242954 4729 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242963 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242972 4729 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242980 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242988 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.242997 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243006 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243014 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243023 4729 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243032 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243040 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243048 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243057 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243065 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243073 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243082 4729 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243090 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243098 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243106 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243115 4729 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243124 4729 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243132 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.243141 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243150 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243158 4729 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243169 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243178 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243187 4729 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243196 4729 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243205 4729 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243215 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243224 4729 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243234 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243242 4729 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243251 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243260 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243269 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243278 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243286 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243295 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243304 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243312 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243321 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243330 4729 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") 
on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243338 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243346 4729 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243356 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243364 4729 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243373 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243381 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243410 4729 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 
14:05:34.243421 4729 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243429 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243438 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243447 4729 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243455 4729 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243463 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243473 4729 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243481 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243492 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243501 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243509 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243517 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243525 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243533 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243542 4729 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 
14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243551 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243560 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243568 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243577 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243589 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243597 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243608 4729 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243618 4729 reconciler_common.go:293] 
"Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243629 4729 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243640 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243650 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243660 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243671 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243681 4729 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243711 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243720 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243728 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243735 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243743 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243751 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243763 4729 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243771 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc 
kubenswrapper[4729]: I0127 14:05:34.243779 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243788 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243796 4729 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243804 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243812 4729 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243820 4729 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243828 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243836 4729 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243845 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243852 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243861 4729 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243868 4729 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243889 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243898 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243907 4729 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243915 4729 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243923 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243931 4729 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243940 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243949 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243957 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243966 4729 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243974 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243983 4729 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243991 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.243999 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.244008 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.244018 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.255534 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.257615 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkc6\" (UniqueName: \"kubernetes.io/projected/fea7fb0a-4048-41bf-ac80-6a80a1f5fb92-kube-api-access-6jkc6\") pod \"node-resolver-8kktz\" (UID: \"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\") " pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.262901 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.270493 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.280175 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.290845 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.298238 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.307975 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.316245 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.322384 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.324866 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 14:05:34 crc kubenswrapper[4729]: W0127 14:05:34.335945 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-1f8b0221bb92fbc28db1be3ec4c70eb89bac1af4e2cbea77cacccde2cf535ea5 WatchSource:0}: Error finding container 1f8b0221bb92fbc28db1be3ec4c70eb89bac1af4e2cbea77cacccde2cf535ea5: Status 404 returned error can't find the container with id 1f8b0221bb92fbc28db1be3ec4c70eb89bac1af4e2cbea77cacccde2cf535ea5 Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.336596 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.343911 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 14:05:34 crc kubenswrapper[4729]: W0127 14:05:34.346412 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-eb3a42121052a3093fd6f935f3a37dae3f009f3580e8de6d99c96034d6de7ba3 WatchSource:0}: Error finding container eb3a42121052a3093fd6f935f3a37dae3f009f3580e8de6d99c96034d6de7ba3: Status 404 returned error can't find the container with id eb3a42121052a3093fd6f935f3a37dae3f009f3580e8de6d99c96034d6de7ba3 Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.351555 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8kktz" Jan 27 14:05:34 crc kubenswrapper[4729]: W0127 14:05:34.365489 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-8a1cddc73568238541ea2ace9601213b869024ebde24786f78b8b27e49a21f9b WatchSource:0}: Error finding container 8a1cddc73568238541ea2ace9601213b869024ebde24786f78b8b27e49a21f9b: Status 404 returned error can't find the container with id 8a1cddc73568238541ea2ace9601213b869024ebde24786f78b8b27e49a21f9b Jan 27 14:05:34 crc kubenswrapper[4729]: W0127 14:05:34.370840 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea7fb0a_4048_41bf_ac80_6a80a1f5fb92.slice/crio-c5067f2fcf3a305d4ca10483bc66eb752609166ce196d5bf6ce75ff0ef18b1bb WatchSource:0}: Error finding container c5067f2fcf3a305d4ca10483bc66eb752609166ce196d5bf6ce75ff0ef18b1bb: Status 404 returned error can't find the container with id c5067f2fcf3a305d4ca10483bc66eb752609166ce196d5bf6ce75ff0ef18b1bb Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.748350 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.748474 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.748521 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.748555 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748566 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.748576 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748651 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748647 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:05:35.748598928 +0000 UTC m=+22.332789932 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748704 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:35.74868939 +0000 UTC m=+22.332880434 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748704 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748748 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748724 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:35.748714751 +0000 UTC m=+22.332905845 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748664 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748762 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748811 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748823 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:35.748803344 +0000 UTC m=+22.332994388 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748824 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: E0127 14:05:34.748860 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:35.748853165 +0000 UTC m=+22.333044239 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.773334 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 14:00:33 +0000 UTC, rotation deadline is 2026-11-27 22:19:49.99729803 +0000 UTC Jan 27 14:05:34 crc kubenswrapper[4729]: I0127 14:05:34.773420 4729 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7304h14m15.223885388s for next certificate rotation Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.013009 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:36:29.030521511 +0000 UTC Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.203022 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.203062 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1f8b0221bb92fbc28db1be3ec4c70eb89bac1af4e2cbea77cacccde2cf535ea5"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.204932 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.205322 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.206853 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c" exitCode=255 Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.206909 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.206935 4729 scope.go:117] "RemoveContainer" containerID="f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.208962 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.208988 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.208997 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8a1cddc73568238541ea2ace9601213b869024ebde24786f78b8b27e49a21f9b"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.210337 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"eb3a42121052a3093fd6f935f3a37dae3f009f3580e8de6d99c96034d6de7ba3"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.211593 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8kktz" event={"ID":"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92","Type":"ContainerStarted","Data":"90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.211620 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8kktz" event={"ID":"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92","Type":"ContainerStarted","Data":"c5067f2fcf3a305d4ca10483bc66eb752609166ce196d5bf6ce75ff0ef18b1bb"} Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.216271 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.228091 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.240032 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.251937 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.264798 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.276379 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.291498 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.301041 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.301055 4729 scope.go:117] "RemoveContainer" containerID="4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c" Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.301363 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.317654 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.360411 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-ld6q8"] Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.361136 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.365156 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.365590 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.365640 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.365699 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.366088 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.380014 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-khqcl"] Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.380541 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.384733 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.384937 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.386353 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.388371 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.388571 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.388694 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.391061 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hgr4r"] Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.391734 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.393570 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.394148 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.426712 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.438820 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454193 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454299 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-socket-dir-parent\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454339 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-kubelet\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454400 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwgk\" (UniqueName: \"kubernetes.io/projected/c96a4b30-dced-4bf8-8f46-348c1b8972b3-kube-api-access-2fwgk\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454424 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8919c7c3-b36c-4bf1-8aed-355b818721a4-rootfs\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454488 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-cnibin\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-conf-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454528 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8919c7c3-b36c-4bf1-8aed-355b818721a4-mcd-auth-proxy-config\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454548 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-system-cni-dir\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454572 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-cni-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454624 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-os-release\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454640 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-netns\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454657 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454684 
4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8919c7c3-b36c-4bf1-8aed-355b818721a4-proxy-tls\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454724 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-os-release\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454746 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/08165218-cb0f-4830-a709-1ebad64bb005-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454763 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-system-cni-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454802 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-hostroot\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc 
kubenswrapper[4729]: I0127 14:05:35.454825 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-cnibin\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454838 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c96a4b30-dced-4bf8-8f46-348c1b8972b3-cni-binary-copy\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454851 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-k8s-cni-cncf-io\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454869 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-cni-bin\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454904 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-cni-multus\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc 
kubenswrapper[4729]: I0127 14:05:35.454917 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-daemon-config\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.454963 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08165218-cb0f-4830-a709-1ebad64bb005-cni-binary-copy\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.455058 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-etc-kubernetes\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.455102 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65xsp\" (UniqueName: \"kubernetes.io/projected/08165218-cb0f-4830-a709-1ebad64bb005-kube-api-access-65xsp\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.455124 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-multus-certs\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " 
pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.455138 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qbl\" (UniqueName: \"kubernetes.io/projected/8919c7c3-b36c-4bf1-8aed-355b818721a4-kube-api-access-q5qbl\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.472747 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.486165 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.501662 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.521167 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.536891 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:28Z\\\",\\\"message\\\":\\\"W0127 14:05:17.327320 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 14:05:17.327599 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769522717 cert, and key in /tmp/serving-cert-3115967305/serving-signer.crt, /tmp/serving-cert-3115967305/serving-signer.key\\\\nI0127 14:05:17.606084 1 observer_polling.go:159] Starting file observer\\\\nW0127 14:05:17.608618 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:05:17.608736 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:17.609370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115967305/tls.crt::/tmp/serving-cert-3115967305/tls.key\\\\\\\"\\\\nF0127 14:05:28.129978 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.550950 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555438 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-etc-kubernetes\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555465 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65xsp\" (UniqueName: \"kubernetes.io/projected/08165218-cb0f-4830-a709-1ebad64bb005-kube-api-access-65xsp\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555482 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-multus-certs\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555498 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qbl\" (UniqueName: \"kubernetes.io/projected/8919c7c3-b36c-4bf1-8aed-355b818721a4-kube-api-access-q5qbl\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555513 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-socket-dir-parent\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555513 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-etc-kubernetes\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555527 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-kubelet\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555540 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwgk\" (UniqueName: \"kubernetes.io/projected/c96a4b30-dced-4bf8-8f46-348c1b8972b3-kube-api-access-2fwgk\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555557 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8919c7c3-b36c-4bf1-8aed-355b818721a4-rootfs\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555574 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-cnibin\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555590 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-conf-dir\") pod \"multus-ld6q8\" (UID: 
\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555628 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-cni-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555653 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8919c7c3-b36c-4bf1-8aed-355b818721a4-mcd-auth-proxy-config\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555688 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-system-cni-dir\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555707 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-os-release\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555720 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-netns\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" 
Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555754 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8919c7c3-b36c-4bf1-8aed-355b818721a4-proxy-tls\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555768 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-system-cni-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555781 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-hostroot\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555795 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-os-release\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555808 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/08165218-cb0f-4830-a709-1ebad64bb005-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555840 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-cnibin\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555842 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-multus-certs\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555853 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c96a4b30-dced-4bf8-8f46-348c1b8972b3-cni-binary-copy\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555870 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-system-cni-dir\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555867 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-k8s-cni-cncf-io\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555913 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-cni-bin\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555929 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-cni-multus\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555943 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-daemon-config\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555960 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08165218-cb0f-4830-a709-1ebad64bb005-cni-binary-copy\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556081 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-os-release\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556118 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-os-release\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556109 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-hostroot\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556097 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-conf-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556184 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-socket-dir-parent\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556188 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-netns\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " 
pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556269 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-kubelet\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556594 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08165218-cb0f-4830-a709-1ebad64bb005-cni-binary-copy\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556600 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/08165218-cb0f-4830-a709-1ebad64bb005-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.555914 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-run-k8s-cni-cncf-io\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556645 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-cnibin\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 
14:05:35.556631 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-cnibin\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556710 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-system-cni-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556712 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-cni-dir\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556723 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8919c7c3-b36c-4bf1-8aed-355b818721a4-rootfs\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556741 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-cni-bin\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556805 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c96a4b30-dced-4bf8-8f46-348c1b8972b3-host-var-lib-cni-multus\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.556939 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08165218-cb0f-4830-a709-1ebad64bb005-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.557234 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c96a4b30-dced-4bf8-8f46-348c1b8972b3-cni-binary-copy\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.557575 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8919c7c3-b36c-4bf1-8aed-355b818721a4-mcd-auth-proxy-config\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.557601 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c96a4b30-dced-4bf8-8f46-348c1b8972b3-multus-daemon-config\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.560759 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8919c7c3-b36c-4bf1-8aed-355b818721a4-proxy-tls\") pod 
\"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.567039 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.572644 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qbl\" (UniqueName: \"kubernetes.io/projected/8919c7c3-b36c-4bf1-8aed-355b818721a4-kube-api-access-q5qbl\") pod \"machine-config-daemon-khqcl\" (UID: \"8919c7c3-b36c-4bf1-8aed-355b818721a4\") " pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.579825 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65xsp\" (UniqueName: \"kubernetes.io/projected/08165218-cb0f-4830-a709-1ebad64bb005-kube-api-access-65xsp\") pod \"multus-additional-cni-plugins-hgr4r\" (UID: \"08165218-cb0f-4830-a709-1ebad64bb005\") " pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.581344 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fwgk\" (UniqueName: \"kubernetes.io/projected/c96a4b30-dced-4bf8-8f46-348c1b8972b3-kube-api-access-2fwgk\") pod \"multus-ld6q8\" (UID: \"c96a4b30-dced-4bf8-8f46-348c1b8972b3\") " 
pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.582755 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.595242 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.607427 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.623712 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.637795 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.654344 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.680259 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-ld6q8" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.690558 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:05:35 crc kubenswrapper[4729]: W0127 14:05:35.698991 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc96a4b30_dced_4bf8_8f46_348c1b8972b3.slice/crio-d7aa231e54e30722ea0e91db93ce26c0f68d5f4289741046eb71ddc0d8868f27 WatchSource:0}: Error finding container d7aa231e54e30722ea0e91db93ce26c0f68d5f4289741046eb71ddc0d8868f27: Status 404 returned error can't find the container with id d7aa231e54e30722ea0e91db93ce26c0f68d5f4289741046eb71ddc0d8868f27 Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.702163 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" Jan 27 14:05:35 crc kubenswrapper[4729]: W0127 14:05:35.714530 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08165218_cb0f_4830_a709_1ebad64bb005.slice/crio-a6058cc1ad41b93ccad7b6f80672ea40c381445a6df9b2a66b860ce794912bd8 WatchSource:0}: Error finding container a6058cc1ad41b93ccad7b6f80672ea40c381445a6df9b2a66b860ce794912bd8: Status 404 returned error can't find the container with id a6058cc1ad41b93ccad7b6f80672ea40c381445a6df9b2a66b860ce794912bd8 Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.757286 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757416 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:05:37.757387573 +0000 UTC m=+24.341578577 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.757454 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.757497 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.757519 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.757543 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757700 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757718 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757731 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757772 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:37.757762825 +0000 UTC m=+24.341953829 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757802 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757813 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757863 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757945 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:37.75792074 +0000 UTC m=+24.342111814 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.757896 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.758003 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.758028 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:37.757996542 +0000 UTC m=+24.342187546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:35 crc kubenswrapper[4729]: E0127 14:05:35.758053 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:05:37.758044913 +0000 UTC m=+24.342235917 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.790530 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9l5t6"] Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.792924 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.799704 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.799939 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.800074 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.800173 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.800460 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.801155 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 14:05:35 crc 
kubenswrapper[4729]: I0127 14:05:35.801313 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.815008 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.834500 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.850231 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858194 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-systemd\") pod \"ovnkube-node-9l5t6\" (UID: 
\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858237 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858264 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-env-overrides\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858308 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-script-lib\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858345 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-kubelet\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858365 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-var-lib-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858389 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-node-log\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858441 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovn-node-metrics-cert\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858498 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v24tk\" (UniqueName: \"kubernetes.io/projected/e351d0ac-c092-4226-84d2-dbcea45c1ec0-kube-api-access-v24tk\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858518 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858539 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-ovn\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858632 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-ovn-kubernetes\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858692 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-log-socket\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858723 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-netd\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858752 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-netns\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858807 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-config\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858833 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-bin\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858919 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-slash\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858962 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-etc-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.858987 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-systemd-units\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.865583 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5b321c1a65ac51d3f156e1213146d703be5839555117074b8aa7f5b79c9a938\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:28Z\\\",\\\"message\\\":\\\"W0127 14:05:17.327320 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 14:05:17.327599 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769522717 cert, and key in /tmp/serving-cert-3115967305/serving-signer.crt, /tmp/serving-cert-3115967305/serving-signer.key\\\\nI0127 14:05:17.606084 1 observer_polling.go:159] Starting file observer\\\\nW0127 14:05:17.608618 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 14:05:17.608736 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:17.609370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3115967305/tls.crt::/tmp/serving-cert-3115967305/tls.key\\\\\\\"\\\\nF0127 14:05:28.129978 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472
0243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.882151 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.900244 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.914046 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.928574 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.945392 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.958849 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959375 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-systemd\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959425 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959444 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-env-overrides\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959461 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-script-lib\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959485 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-kubelet\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959500 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-var-lib-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959513 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-node-log\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959531 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovn-node-metrics-cert\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959546 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v24tk\" (UniqueName: \"kubernetes.io/projected/e351d0ac-c092-4226-84d2-dbcea45c1ec0-kube-api-access-v24tk\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 
14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959560 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959574 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-ovn\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959589 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-ovn-kubernetes\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959589 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-kubelet\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959648 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-log-socket\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959655 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-var-lib-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959691 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959715 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-node-log\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960328 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-env-overrides\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959603 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-log-socket\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960354 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960392 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-netd\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960417 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-ovn\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960421 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-netns\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960444 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-netns\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960473 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-netd\") pod \"ovnkube-node-9l5t6\" 
(UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960471 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-bin\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960493 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-bin\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960505 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-config\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960526 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-ovn-kubernetes\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.959544 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-systemd\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" 
Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960551 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-slash\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960611 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-etc-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960679 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-systemd-units\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960742 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-systemd-units\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960763 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-slash\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960798 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-etc-openvswitch\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.960826 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-script-lib\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.961311 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-config\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.966637 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovn-node-metrics-cert\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.976077 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:35Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:35 crc kubenswrapper[4729]: I0127 14:05:35.977418 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v24tk\" (UniqueName: \"kubernetes.io/projected/e351d0ac-c092-4226-84d2-dbcea45c1ec0-kube-api-access-v24tk\") pod \"ovnkube-node-9l5t6\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.004340 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.013297 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:30:36.9792366 +0000 UTC Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.050966 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:36 crc kubenswrapper[4729]: E0127 14:05:36.051106 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.051187 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:36 crc kubenswrapper[4729]: E0127 14:05:36.051241 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.051335 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:36 crc kubenswrapper[4729]: E0127 14:05:36.051384 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.055322 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.056128 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.056924 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.057580 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.058329 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.058907 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.059587 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.060347 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.062492 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.063023 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.063551 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.064562 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.065123 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.066123 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.066604 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.067472 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.068028 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.068386 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.070190 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.070782 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.071237 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.072181 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.072610 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.074706 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.075316 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.076378 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.077054 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.078990 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.079547 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.080084 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.080956 4729 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.081055 4729 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.082785 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.083707 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.084197 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.085673 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.088175 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.088737 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.089790 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.090467 4729 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.091289 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.091871 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.092813 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.093482 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.094333 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.094842 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.095709 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.096404 4729 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.097283 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.097744 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.098572 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.099153 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.099704 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.100791 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.108275 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.217487 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.220537 4729 scope.go:117] "RemoveContainer" containerID="4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c" Jan 27 14:05:36 crc kubenswrapper[4729]: E0127 14:05:36.220994 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.221489 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"c18dc5ba54bfb24826d2d958d5d371d7f283f927858b308e511e56377891c73d"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.224564 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerStarted","Data":"743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.224646 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerStarted","Data":"a6058cc1ad41b93ccad7b6f80672ea40c381445a6df9b2a66b860ce794912bd8"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 
14:05:36.229459 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.229545 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.229569 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"75e8cae8541b6a8f834d78e511616ee9b1be87abf2e862805803d66623b6f5ef"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.231220 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerStarted","Data":"76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.231262 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerStarted","Data":"d7aa231e54e30722ea0e91db93ce26c0f68d5f4289741046eb71ddc0d8868f27"} Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.238136 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.254859 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.266366 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.278057 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.292569 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.304326 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.315584 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.330119 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.349611 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.375896 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.390539 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.403428 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.418399 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.431275 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.443480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.458037 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 
14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.472211 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.485997 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.499888 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6
ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.512300 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.524416 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.536337 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.553052 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.571051 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.785937 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.799922 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.802788 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.808247 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.816812 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.831732 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.840678 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.855968 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.867046 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6
ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.878869 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.892490 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.908884 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.931233 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.947682 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2a
b8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.961676 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.972732 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.983996 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:36 crc kubenswrapper[4729]: I0127 14:05:36.993948 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:36Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.008058 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.014046 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:59:47.355026243 +0000 UTC Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.019670 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.034289 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.051510 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.064260 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2a
b8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.074437 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.092594 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.104274 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.115648 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.152382 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.236479 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55"} Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.237835 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" 
containerID="de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc" exitCode=0 Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.237979 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc"} Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.239838 4729 generic.go:334] "Generic (PLEG): container finished" podID="08165218-cb0f-4830-a709-1ebad64bb005" containerID="743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4" exitCode=0 Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.239903 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerDied","Data":"743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4"} Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.256376 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.270972 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.285920 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.314776 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.379317 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.421640 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.441277 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.473695 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.518788 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.552502 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.594257 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.644830 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.675678 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.721915 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.756189 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.781061 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.781163 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.781186 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.781204 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.781223 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781331 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781344 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781356 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781388 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:41.781376389 +0000 UTC m=+28.365567393 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781437 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781463 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:41.781455881 +0000 UTC m=+28.365646875 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781467 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781528 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:05:41.781511433 +0000 UTC m=+28.365702437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781468 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781555 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781569 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781637 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:41.781597026 +0000 UTC m=+28.365788120 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:37 crc kubenswrapper[4729]: E0127 14:05:37.781772 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:05:41.781703159 +0000 UTC m=+28.365894283 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.806281 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.838148 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.876715 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.921403 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.954770 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:37 crc kubenswrapper[4729]: I0127 14:05:37.996931 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.005108 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9l9wv"] Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.005520 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.014440 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:31:04.666117448 +0000 UTC Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.026353 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.045569 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.050862 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:38 crc kubenswrapper[4729]: E0127 14:05:38.051067 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.051103 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.051166 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:38 crc kubenswrapper[4729]: E0127 14:05:38.051173 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:38 crc kubenswrapper[4729]: E0127 14:05:38.051315 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.065061 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.085010 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d187af1-26d7-49c1-b74a-cd8dab606cd7-host\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.085081 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78cw\" (UniqueName: \"kubernetes.io/projected/2d187af1-26d7-49c1-b74a-cd8dab606cd7-kube-api-access-h78cw\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.085148 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d187af1-26d7-49c1-b74a-cd8dab606cd7-serviceca\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.085364 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.112243 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.156682 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.185788 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d187af1-26d7-49c1-b74a-cd8dab606cd7-serviceca\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.185912 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d187af1-26d7-49c1-b74a-cd8dab606cd7-host\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.185961 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h78cw\" (UniqueName: 
\"kubernetes.io/projected/2d187af1-26d7-49c1-b74a-cd8dab606cd7-kube-api-access-h78cw\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.186050 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2d187af1-26d7-49c1-b74a-cd8dab606cd7-host\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.187046 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2d187af1-26d7-49c1-b74a-cd8dab606cd7-serviceca\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.199540 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.220082 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h78cw\" (UniqueName: \"kubernetes.io/projected/2d187af1-26d7-49c1-b74a-cd8dab606cd7-kube-api-access-h78cw\") pod \"node-ca-9l9wv\" (UID: \"2d187af1-26d7-49c1-b74a-cd8dab606cd7\") " pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.252417 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.252489 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" 
event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.252502 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.252514 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.252526 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.252536 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.255013 4729 generic.go:334] "Generic (PLEG): container finished" podID="08165218-cb0f-4830-a709-1ebad64bb005" containerID="c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c" exitCode=0 Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.255423 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" 
event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerDied","Data":"c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c"} Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.262362 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.293609 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.320068 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9l9wv" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.336324 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: W0127 14:05:38.340082 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d187af1_26d7_49c1_b74a_cd8dab606cd7.slice/crio-ea9307799ab6540886d2466ac9cbf00b4782961dbdc11b803e19b2837a34a858 WatchSource:0}: Error finding container ea9307799ab6540886d2466ac9cbf00b4782961dbdc11b803e19b2837a34a858: Status 404 returned error can't find the container with id ea9307799ab6540886d2466ac9cbf00b4782961dbdc11b803e19b2837a34a858 Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.376050 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\
":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.413840 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.454015 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.495347 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.535270 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.577158 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.618334 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.657948 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.695238 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.753716 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.776319 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.815168 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:38 crc kubenswrapper[4729]: I0127 14:05:38.857697 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:38Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.014948 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:12:31.296733434 +0000 UTC Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.261207 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9l9wv" 
event={"ID":"2d187af1-26d7-49c1-b74a-cd8dab606cd7","Type":"ContainerStarted","Data":"35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d"} Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.261258 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9l9wv" event={"ID":"2d187af1-26d7-49c1-b74a-cd8dab606cd7","Type":"ContainerStarted","Data":"ea9307799ab6540886d2466ac9cbf00b4782961dbdc11b803e19b2837a34a858"} Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.265239 4729 generic.go:334] "Generic (PLEG): container finished" podID="08165218-cb0f-4830-a709-1ebad64bb005" containerID="b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483" exitCode=0 Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.265326 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerDied","Data":"b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483"} Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.289679 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.313731 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.333971 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.349667 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.363187 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.375677 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.394948 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.410234 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.420346 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.433714 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.446480 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6
ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.458668 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.475457 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.488080 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.488827 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.493425 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.497930 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.500641 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.513200 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.554226 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.595297 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.639795 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.675396 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.713068 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.758810 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.797994 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6
ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.835757 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.876945 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.917960 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:39 crc kubenswrapper[4729]: I0127 14:05:39.959359 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:39Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.002439 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.015161 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:38:23.01707931 +0000 UTC Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.042967 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.050605 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.050688 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.050723 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.050612 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.050870 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.050971 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.076415 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 
14:05:40.117837 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.133495 4729 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.135254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.135288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.135301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.135423 4729 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.154411 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.207253 4729 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.207550 4729 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.208612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.208634 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.208645 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.208667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.208679 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.220733 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.224190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.224264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.224282 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.224303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.224319 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.235319 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc45
7d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.242682 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.246111 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.246252 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.246342 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.246410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.246471 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.258213 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.261327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.261388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.261400 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.261418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.261429 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.270076 4729 generic.go:334] "Generic (PLEG): container finished" podID="08165218-cb0f-4830-a709-1ebad64bb005" containerID="09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a" exitCode=0 Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.270153 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerDied","Data":"09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.273388 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf"} Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.276604 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.277508 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.279855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.279904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.279918 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.279935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.279948 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.292317 4729 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.293947 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: E0127 14:05:40.294051 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.295605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.295644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.295652 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.295669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.295679 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.334269 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.381506 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.398343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.398380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.398392 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.398405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.398415 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.420294 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.455774 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.495865 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.500749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.500781 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.500795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.500812 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.500827 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.537799 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.573494 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.603585 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.603623 4729 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.603631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.603647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.603656 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.612366 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.655129 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.699356 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.705643 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.705682 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.705694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.705712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.705723 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.735133 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.774513 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.809040 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.809083 4729 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.809092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.809108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.809121 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.821903 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.856975 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.892723 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.911867 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.911914 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.911922 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.911938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.911947 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:40Z","lastTransitionTime":"2026-01-27T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.941927 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:40 crc kubenswrapper[4729]: I0127 14:05:40.993699 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:40Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.012667 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.014320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.014427 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.014493 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.014555 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.014611 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.018109 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:11:44.116888599 +0000 UTC Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.053181 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 
14:05:41.093925 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.117846 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.117912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.117925 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.117945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.117957 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.137842 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.182488 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.219460 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.220560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.220596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.220607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.220622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.220632 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.255649 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.278602 4729 generic.go:334] "Generic (PLEG): container finished" podID="08165218-cb0f-4830-a709-1ebad64bb005" containerID="818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec" exitCode=0 Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.278911 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerDied","Data":"818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.301111 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.323429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.323482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.323496 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.323543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.323557 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.339566 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.373402 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.414081 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.426400 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.426430 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.426438 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.426450 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.426460 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.460050 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.495659 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.528830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.528900 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.528910 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.528930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.528942 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.533653 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc45
7d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.573283 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.615141 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.631489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.631530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.631538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.631554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.631566 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.660264 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.700748 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.733856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.733935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.733951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.733971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.733985 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.734950 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.775687 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.811591 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 
27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.812806 4729 scope.go:117] "RemoveContainer" containerID="4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c" Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.813065 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.817106 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.818440 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.818592 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.818623 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:05:49.818595684 +0000 UTC m=+36.402786688 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.818659 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.818704 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.818749 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.818858 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 
14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.818933 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819009 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.818892 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.818863 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819079 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:49.819056608 +0000 UTC m=+36.403247632 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.818896 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819106 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:49.819095509 +0000 UTC m=+36.403286523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819109 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819126 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:05:49.8191167 +0000 UTC m=+36.403307714 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819128 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:41 crc kubenswrapper[4729]: E0127 14:05:41.819184 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:49.819168662 +0000 UTC m=+36.403359846 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.836380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.836424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.836437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.836455 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.836468 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.855153 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.938932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.938986 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.938999 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.939019 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:41 crc kubenswrapper[4729]: I0127 14:05:41.939035 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:41Z","lastTransitionTime":"2026-01-27T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.018727 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:19:20.65112148 +0000 UTC Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.041930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.041967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.041975 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.041994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.042005 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.050153 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.050203 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:42 crc kubenswrapper[4729]: E0127 14:05:42.050251 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.050262 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:42 crc kubenswrapper[4729]: E0127 14:05:42.050353 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:42 crc kubenswrapper[4729]: E0127 14:05:42.050515 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.144790 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.145170 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.145185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.145207 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.145221 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.248239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.248576 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.248644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.248733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.248802 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.266470 4729 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.286717 4729 generic.go:334] "Generic (PLEG): container finished" podID="08165218-cb0f-4830-a709-1ebad64bb005" containerID="2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88" exitCode=0 Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.286791 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerDied","Data":"2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.291750 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.292089 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.302421 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.316359 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.318484 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.333700 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 
2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.348058 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.352252 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.352380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.352453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.352552 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.352652 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.360701 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.375646 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.393511 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.413280 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.428742 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.445303 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.455922 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.455956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.455967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.455983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.455996 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.468574 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.482509 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.495479 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.508389 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.520657 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.534416 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.552164 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.558788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.558826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.558837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.558853 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.558865 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.594170 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.641323 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.661095 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.661131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.661140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.661155 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.661166 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.674496 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.713832 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.756003 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.765492 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.765539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.765550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.765571 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.765583 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.793195 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.832763 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"}
,{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.867285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc 
kubenswrapper[4729]: I0127 14:05:42.867346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.867376 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.867395 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.867407 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.870960 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.911826 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.954198 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.970268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.970327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.970345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.970371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.970387 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:42Z","lastTransitionTime":"2026-01-27T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:42 crc kubenswrapper[4729]: I0127 14:05:42.991439 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:42Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.019384 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:01:05.07273083 +0000 UTC Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.034535 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.072560 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.072592 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.072601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.072615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.072624 4729 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.079433 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.175023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.175069 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.175081 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.175108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.175127 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.277436 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.277480 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.277491 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.277512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.277529 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.300152 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" event={"ID":"08165218-cb0f-4830-a709-1ebad64bb005","Type":"ContainerStarted","Data":"4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.300217 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.300843 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.316093 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.328964 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.331482 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.349046 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.363798 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.375853 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.379295 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.379330 4729 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.379340 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.379356 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.379368 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.385605 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.398343 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.407639 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.431269 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.474836 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.481606 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.481639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.481652 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.481671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.481684 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.513616 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.555025 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.584542 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.584586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.584598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.584616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.584628 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.597853 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.634623 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
7T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.672253 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.687308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.687342 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.687351 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.687366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.687377 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.713089 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.753751 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.790321 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.790364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.790373 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.790391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.790409 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.801822 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.834375 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.872869 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.893341 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.893395 4729 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.893409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.893431 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.893455 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.913705 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.953688 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.993831 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.995303 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.995338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.995350 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:43 crc 
kubenswrapper[4729]: I0127 14:05:43.995369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:43 crc kubenswrapper[4729]: I0127 14:05:43.995380 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:43Z","lastTransitionTime":"2026-01-27T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.020079 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:57:12.025518215 +0000 UTC Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.033590 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.050039 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.050106 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:44 crc kubenswrapper[4729]: E0127 14:05:44.050169 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:44 crc kubenswrapper[4729]: E0127 14:05:44.050257 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.050489 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:44 crc kubenswrapper[4729]: E0127 14:05:44.050655 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.073899 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.097355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.097391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.097401 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.097416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.097428 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.114147 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.150963 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.193218 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.200233 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.200290 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.200300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.200316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.200326 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.235551 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.281181 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.301909 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.302382 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.302448 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.302535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.302602 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.302665 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.312934 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.351820 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.393776 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.405916 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 
14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.405967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.405981 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.405998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.406012 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.433810 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.473040 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.508315 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.508385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.508397 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.508415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.508427 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.520075 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.556985 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.565769 4729 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.610402 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.610444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.610454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.610469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.610481 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.615223 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.652393 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.695449 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.712527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.712590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.712609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.712629 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.712642 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.732553 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.772025 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.813453 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.814821 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.814860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.814869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.814898 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.814909 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.858523 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.895558 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef
8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2eb
c8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T
14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.917465 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.917512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.917525 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.917545 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.917561 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:44Z","lastTransitionTime":"2026-01-27T14:05:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:44 crc kubenswrapper[4729]: I0127 14:05:44.945096 4729 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.019548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.019675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.019695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.019726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.019738 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.020959 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 22:03:25.089028603 +0000 UTC Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.093696 4729 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.122577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.122623 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.122633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.122651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.122662 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.225465 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.225506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.225519 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.225537 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.225550 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.305659 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.328410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.328457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.328466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.328484 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.328501 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.430962 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.431003 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.431012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.431028 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.431038 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.533863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.533971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.534054 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.534087 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.534109 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.636260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.636374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.636391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.636416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.636434 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.739125 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.739157 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.739168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.739182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.739191 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.842018 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.842067 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.842078 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.842093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.842102 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.944754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.944824 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.944849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.944919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:45 crc kubenswrapper[4729]: I0127 14:05:45.944947 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:45Z","lastTransitionTime":"2026-01-27T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.021620 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:17:37.223308421 +0000 UTC Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.048249 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.048332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.048355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.048382 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.048405 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.050725 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.050813 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.051026 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:46 crc kubenswrapper[4729]: E0127 14:05:46.051025 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:46 crc kubenswrapper[4729]: E0127 14:05:46.051278 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:46 crc kubenswrapper[4729]: E0127 14:05:46.051461 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.151376 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.151424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.151440 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.151458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.151469 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.254660 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.254741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.254760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.254786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.254806 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.311036 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/0.log" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.313823 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerID="9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2" exitCode=1 Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.313868 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.314555 4729 scope.go:117] "RemoveContainer" containerID="9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.328392 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.349406 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.358644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.358738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.358753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.358772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.358784 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.362585 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.374488 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.392079 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.410746 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.449847 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.461659 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.461708 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.461720 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.461740 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.461752 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.468855 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed 
*v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde824828
9b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.479689 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.490271 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.506120 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1
d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.518467 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb
38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.529352 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.541433 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.554519 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:46Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.564220 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.564268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.564278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.564290 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.564299 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.669096 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.669131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.669140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.669154 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.669164 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.772001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.772042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.772052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.772067 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.772076 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.873907 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.873951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.873961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.873978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.873989 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.975966 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.976006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.976017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.976033 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:46 crc kubenswrapper[4729]: I0127 14:05:46.976046 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:46Z","lastTransitionTime":"2026-01-27T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.022756 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:08:32.421303565 +0000 UTC Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.077529 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.077564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.077572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.077587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.077595 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.180485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.180518 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.180529 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.180543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.180554 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.282956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.282995 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.283004 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.283018 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.283030 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.318183 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/0.log" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.325529 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.325693 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.348811 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.364991 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.378089 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.385512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.385538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.385548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.385562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.385571 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.393204 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.406447 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.419299 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.431444 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.459974 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.471670 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.482239 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.487578 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.487622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.487633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.487650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.487661 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.494098 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.511650 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.524968 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6
ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.534941 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.551240 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.591305 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.591358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.591371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.591389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.591403 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.603258 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr"] Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.603694 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.605887 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.605921 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.621693 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.635115 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.651656 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.665830 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.674966 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqs5n\" (UniqueName: \"kubernetes.io/projected/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-kube-api-access-pqs5n\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.675008 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.675046 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.675067 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.683826 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.694050 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.694083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.694095 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.694108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.694116 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.696854 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.708204 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.716549 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.729376 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.740738 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.753643 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.767080 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.776124 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqs5n\" (UniqueName: \"kubernetes.io/projected/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-kube-api-access-pqs5n\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.776182 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.776228 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.776260 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.776824 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.777168 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.783904 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.788739 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.794728 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqs5n\" (UniqueName: \"kubernetes.io/projected/36ed3984-2bfa-44db-8ef3-985fc2abbeb0-kube-api-access-pqs5n\") pod \"ovnkube-control-plane-749d76644c-tb4tr\" (UID: \"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.796773 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.796817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc 
kubenswrapper[4729]: I0127 14:05:47.796828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.796845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.796857 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.805936 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.817421 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.832462 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.899566 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.899632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.899650 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.899673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.899689 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:47Z","lastTransitionTime":"2026-01-27T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:47 crc kubenswrapper[4729]: I0127 14:05:47.916346 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" Jan 27 14:05:47 crc kubenswrapper[4729]: W0127 14:05:47.931546 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ed3984_2bfa_44db_8ef3_985fc2abbeb0.slice/crio-c215ad367c617aed8c693356b167bff280b102c5fa48802ba79a88923473dc84 WatchSource:0}: Error finding container c215ad367c617aed8c693356b167bff280b102c5fa48802ba79a88923473dc84: Status 404 returned error can't find the container with id c215ad367c617aed8c693356b167bff280b102c5fa48802ba79a88923473dc84 Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.002548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.002602 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.002614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.002632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.002645 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.023993 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:25:20.198246657 +0000 UTC Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.050287 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.050291 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.050454 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.050552 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.050760 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.050846 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.104862 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.104938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.104951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.104971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.104983 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.208110 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.208169 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.208181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.208199 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.208210 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.310423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.310476 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.310492 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.310513 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.310528 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.332355 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" event={"ID":"36ed3984-2bfa-44db-8ef3-985fc2abbeb0","Type":"ContainerStarted","Data":"6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.332399 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" event={"ID":"36ed3984-2bfa-44db-8ef3-985fc2abbeb0","Type":"ContainerStarted","Data":"c215ad367c617aed8c693356b167bff280b102c5fa48802ba79a88923473dc84"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.333857 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/1.log" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.335224 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/0.log" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.338183 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerID="7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460" exitCode=1 Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.338228 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.338267 4729 scope.go:117] "RemoveContainer" containerID="9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.339073 
4729 scope.go:117] "RemoveContainer" containerID="7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460" Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.339211 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.351568 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.363127 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.376305 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.388896 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.401238 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.413331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.413370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.413383 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.413401 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.413413 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.422025 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.434505 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.446006 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.459740 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.472648 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.484244 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.496634 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.509025 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.515905 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.515948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.515961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.515978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.515990 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.531042 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.548182 4729 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.563061 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.618284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.618315 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.618327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.618343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.618355 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.691062 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-thlc7"] Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.691648 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.691780 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.704596 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.720120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.720149 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.720158 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.720172 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.720181 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.723696 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.734961 4729 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.749563 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.761720 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.775065 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.786006 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.786081 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrz4s\" (UniqueName: \"kubernetes.io/projected/c06c7af2-5a87-49e1-82ce-84aa16280c72-kube-api-access-mrz4s\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.798085 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.813359 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.822505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.822550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.822561 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.822581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.822593 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.829724 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc45
7d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.840932 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.853755 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.865495 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.876482 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.887049 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.887109 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrz4s\" (UniqueName: \"kubernetes.io/projected/c06c7af2-5a87-49e1-82ce-84aa16280c72-kube-api-access-mrz4s\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.887243 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:48 crc kubenswrapper[4729]: E0127 14:05:48.887331 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:49.387310801 +0000 UTC m=+35.971501805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.889149 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.901433 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.904014 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrz4s\" (UniqueName: \"kubernetes.io/projected/c06c7af2-5a87-49e1-82ce-84aa16280c72-kube-api-access-mrz4s\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.914567 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.924785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.924828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.924839 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.924858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.924870 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:48Z","lastTransitionTime":"2026-01-27T14:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:48 crc kubenswrapper[4729]: I0127 14:05:48.928044 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:48Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.024330 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:40:00.764745647 +0000 UTC Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.027126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.027164 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.027176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.027195 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.027210 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.131461 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.131502 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.131512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.131527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.131539 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.234349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.234796 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.235043 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.235286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.235436 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.338615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.338661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.338672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.338690 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.338705 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.345049 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" event={"ID":"36ed3984-2bfa-44db-8ef3-985fc2abbeb0","Type":"ContainerStarted","Data":"66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.347477 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/1.log" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.366867 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f
0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce9
8122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.390495 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.392066 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.392480 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.392573 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:50.392550425 +0000 UTC m=+36.976741429 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.401825 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.416441 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.426160 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.441273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.441327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.441337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.441352 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.441362 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.446110 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.461252 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.472684 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.483620 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.497429 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.506617 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc 
kubenswrapper[4729]: I0127 14:05:49.518408 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.529863 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.539589 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.544422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.544471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.544488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.544510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.544529 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.550942 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.560793 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.572765 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:49Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.648620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.648675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.648688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.648707 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.648717 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.751344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.751390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.751404 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.751423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.751436 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.854193 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.854246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.854256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.854279 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.854291 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.897183 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897331 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:06:05.897300863 +0000 UTC m=+52.481491897 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.897396 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.897433 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.897462 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.897498 4729 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897632 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897639 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897662 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897676 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897689 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:05.897674614 +0000 UTC m=+52.481865638 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897711 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:05.897700955 +0000 UTC m=+52.481891959 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897736 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897780 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897798 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897886 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:05.89784602 +0000 UTC m=+52.482037024 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.897743 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: E0127 14:05:49.898054 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:05.898025105 +0000 UTC m=+52.482216109 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.957127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.957171 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.957181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.957196 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:49 crc kubenswrapper[4729]: I0127 14:05:49.957209 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:49Z","lastTransitionTime":"2026-01-27T14:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.025506 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:12:51.078828265 +0000 UTC Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.049961 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.050106 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.050206 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.050430 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.050430 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.050539 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.050695 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.050820 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.059122 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.059162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.059179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.059198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.059211 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.161716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.161794 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.161826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.161854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.161917 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.265151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.265211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.265221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.265238 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.265248 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.367406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.367457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.367471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.367495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.367510 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.402735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.402930 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.403034 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:52.40300177 +0000 UTC m=+38.987192804 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.469977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.470035 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.470048 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.470065 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.470078 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.572557 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.572615 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.572628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.572647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.572660 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.617422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.617509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.617520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.617534 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.617543 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.631944 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:50Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.634921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.634972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.634982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.634998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.635008 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.645440 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:50Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.648681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.648716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.648727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.648742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.648753 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.661124 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:50Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.666702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.666771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.666785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.666802 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.666819 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.681144 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:50Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.684761 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.684870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.684945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.685007 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.685065 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.695709 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:50Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:50 crc kubenswrapper[4729]: E0127 14:05:50.695950 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.697136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.697236 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.697252 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.697270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.697285 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.799800 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.799857 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.799899 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.799919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.799932 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.902221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.902267 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.902277 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.902292 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:50 crc kubenswrapper[4729]: I0127 14:05:50.902303 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:50Z","lastTransitionTime":"2026-01-27T14:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.005178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.005226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.005245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.005268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.005281 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.025651 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:44:48.396053092 +0000 UTC Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.108593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.108640 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.108651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.108668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.108682 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.211792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.211845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.211856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.211895 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.211911 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.314562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.314605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.314616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.314631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.314642 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.418156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.418246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.418260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.418293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.418308 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.520549 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.520592 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.520604 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.520622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.520635 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.622587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.622620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.622629 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.622642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.622651 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.725661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.725719 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.725732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.725752 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.725765 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.829377 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.829464 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.829476 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.829496 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.829506 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.932309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.932344 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.932353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.932367 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:51 crc kubenswrapper[4729]: I0127 14:05:51.932376 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:51Z","lastTransitionTime":"2026-01-27T14:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.026682 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:25:21.970639932 +0000 UTC Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.034551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.034594 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.034603 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.034620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.034631 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.050082 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.050125 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.050150 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.050112 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:52 crc kubenswrapper[4729]: E0127 14:05:52.050236 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:52 crc kubenswrapper[4729]: E0127 14:05:52.050329 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:52 crc kubenswrapper[4729]: E0127 14:05:52.050409 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:52 crc kubenswrapper[4729]: E0127 14:05:52.050493 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.136754 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.136797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.136810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.136826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.136836 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.239566 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.239649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.239659 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.239672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.239682 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.342625 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.342671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.342681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.342697 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.342710 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.423224 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:52 crc kubenswrapper[4729]: E0127 14:05:52.423381 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:52 crc kubenswrapper[4729]: E0127 14:05:52.423472 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:05:56.423454558 +0000 UTC m=+43.007645562 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.445307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.445338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.445347 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.445359 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.445369 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.547688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.547734 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.547749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.547768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.547780 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.654051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.654093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.654102 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.654116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.654127 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.756783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.756818 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.756826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.756841 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.756851 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.858588 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.858636 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.858645 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.858660 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.858671 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.960810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.960844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.960852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.960888 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:52 crc kubenswrapper[4729]: I0127 14:05:52.960900 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:52Z","lastTransitionTime":"2026-01-27T14:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.026802 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 07:21:38.152521466 +0000 UTC Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.051108 4729 scope.go:117] "RemoveContainer" containerID="4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.063685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.063723 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.063734 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.063751 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.063763 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.166299 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.166345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.166357 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.166372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.166383 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.268369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.268409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.268420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.268433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.268443 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.368251 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.369944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.369974 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.369983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.369996 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.370005 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.370689 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.370977 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.383025 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.394424 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.404733 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.415903 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.426003 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.434602 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.445735 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.462107 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.472504 4729 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.472537 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.472548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.472565 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.472573 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.474971 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.488317 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.502250 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.523381 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45ead
e07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67
314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.538410 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.553119 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.566406 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.575662 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.575712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.575721 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.575736 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.575746 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.581230 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.594380 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:53Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.679039 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.679134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.679148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.679171 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.679183 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.781791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.781833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.781843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.781858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.781867 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.884507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.884590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.884604 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.884633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.884651 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.990541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.990664 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.990749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.990803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:53 crc kubenswrapper[4729]: I0127 14:05:53.990853 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:53Z","lastTransitionTime":"2026-01-27T14:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.027121 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 11:01:42.88865801 +0000 UTC Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.053646 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:54 crc kubenswrapper[4729]: E0127 14:05:54.053807 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.054204 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:54 crc kubenswrapper[4729]: E0127 14:05:54.054282 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.054345 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:54 crc kubenswrapper[4729]: E0127 14:05:54.054411 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.054465 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:54 crc kubenswrapper[4729]: E0127 14:05:54.054527 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.077047 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.094097 4729 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.094194 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.094210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.094232 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.094250 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.094814 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.113712 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.138113 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.150407 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.163886 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.184277 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.196797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.197022 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.197121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.197190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.197258 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.210452 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9613ae045f9ecfd893ccabb5f7dda043365781cbccdf812523db0aa8cfc4d0a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:45Z\\\",\\\"message\\\":\\\"I0127 14:05:45.458097 6008 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 14:05:45.458104 6008 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 14:05:45.458110 6008 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 14:05:45.458116 6008 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 
14:05:45.458121 6008 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 14:05:45.458129 6008 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0127 14:05:45.458134 6008 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 14:05:45.458276 6008 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458361 6008 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458433 6008 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458497 6008 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458550 6008 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 14:05:45.458556 6008 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.224945 4729 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.238027 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.249913 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.269586 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1
d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.283799 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.295923 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.298896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.298933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.298944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.298959 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.298970 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.312187 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.328453 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.341012 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:54Z is after 2025-08-24T17:21:41Z" Jan 27 14:05:54 crc 
kubenswrapper[4729]: I0127 14:05:54.401485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.401553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.401564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.401579 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.401590 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.504320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.504382 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.504394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.504433 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.504445 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.607520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.607564 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.607585 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.607604 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.607618 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.710302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.710339 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.710349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.710371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.710385 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.813588 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.813636 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.813648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.813666 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.813679 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.917057 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.917108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.917120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.917141 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:54 crc kubenswrapper[4729]: I0127 14:05:54.917153 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:54Z","lastTransitionTime":"2026-01-27T14:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.020075 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.020115 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.020126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.020141 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.020152 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.027680 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:05:52.837617544 +0000 UTC Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.122944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.122995 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.123010 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.123031 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.123046 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.225145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.225180 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.225188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.225201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.225209 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.328284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.328334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.328345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.328364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.328374 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.430662 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.430690 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.430698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.430712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.430721 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.533701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.533771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.533783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.533807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.533826 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.636624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.636675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.636684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.636744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.636775 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.740013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.740071 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.740083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.740104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.740120 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.844015 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.844090 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.844103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.844162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.844174 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.947583 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.947648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.947692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.947720 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:55 crc kubenswrapper[4729]: I0127 14:05:55.947737 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:55Z","lastTransitionTime":"2026-01-27T14:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.027819 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 01:30:23.471585879 +0000 UTC Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.050030 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.050089 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.050165 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:56 crc kubenswrapper[4729]: E0127 14:05:56.050247 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.050398 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:56 crc kubenswrapper[4729]: E0127 14:05:56.050389 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:56 crc kubenswrapper[4729]: E0127 14:05:56.050483 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:56 crc kubenswrapper[4729]: E0127 14:05:56.050545 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.051741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.051797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.051807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.051826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.051837 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.154997 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.155067 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.155077 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.155096 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.155107 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.257707 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.257735 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.257743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.257757 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.257767 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.360688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.360745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.360756 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.360771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.360780 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.464039 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.464081 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.464091 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.464112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.464124 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.464766 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:56 crc kubenswrapper[4729]: E0127 14:05:56.464930 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:56 crc kubenswrapper[4729]: E0127 14:05:56.465013 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:04.464995555 +0000 UTC m=+51.049186559 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.566799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.566864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.566923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.566951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.566972 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.669591 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.669629 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.669655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.669669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.669679 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.773295 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.773346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.773360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.773389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.773402 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.876153 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.876225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.876250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.876275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.876292 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.978910 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.979251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.979377 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.979451 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:56 crc kubenswrapper[4729]: I0127 14:05:56.979544 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:56Z","lastTransitionTime":"2026-01-27T14:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.028646 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:00:37.664577006 +0000 UTC Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.082749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.082803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.082815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.082847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.082867 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.185831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.185912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.185929 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.185952 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.185967 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.288245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.288306 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.288320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.288341 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.288357 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.390349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.390385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.390395 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.390408 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.390418 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.492201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.492250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.492262 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.492283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.492297 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.595580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.595651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.595670 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.595698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.595718 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.698657 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.698698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.698709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.698727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.698739 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.801911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.801973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.801985 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.802006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.802020 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.905383 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.905430 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.905442 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.905459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:57 crc kubenswrapper[4729]: I0127 14:05:57.905470 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:57Z","lastTransitionTime":"2026-01-27T14:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.008825 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.008908 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.008924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.008952 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.008966 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.029394 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:50:23.030376082 +0000 UTC Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.051167 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.051306 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.051371 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.051589 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:05:58 crc kubenswrapper[4729]: E0127 14:05:58.051576 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:05:58 crc kubenswrapper[4729]: E0127 14:05:58.051838 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:05:58 crc kubenswrapper[4729]: E0127 14:05:58.052041 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:05:58 crc kubenswrapper[4729]: E0127 14:05:58.052199 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.111614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.111669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.111681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.111698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.111707 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.214937 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.215012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.215021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.215041 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.215053 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.319247 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.319308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.319324 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.319352 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.319367 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.422926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.422977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.422992 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.423012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.423029 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.526088 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.526151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.526220 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.526285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.526304 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.629611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.629680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.629697 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.629727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.629751 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.732681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.732753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.732775 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.732801 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.732820 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.838155 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.838237 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.838343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.838500 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.838518 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.941653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.941700 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.941713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.941731 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:58 crc kubenswrapper[4729]: I0127 14:05:58.941742 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:58Z","lastTransitionTime":"2026-01-27T14:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.030376 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:48:14.797560056 +0000 UTC Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.045411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.045458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.045470 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.045489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.045502 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.149175 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.149221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.149230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.149249 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.149261 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.252439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.252496 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.252510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.252545 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.252560 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.356062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.356120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.356136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.356162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.356180 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.459760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.459852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.459919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.459956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.459983 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.562658 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.562707 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.562718 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.562736 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.562750 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.665980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.666036 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.666051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.666070 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.666083 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.768600 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.768644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.768656 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.768672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.768684 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.871406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.871458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.871468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.871492 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.871507 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.974010 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.974049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.974065 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.974087 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:05:59 crc kubenswrapper[4729]: I0127 14:05:59.974099 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:05:59Z","lastTransitionTime":"2026-01-27T14:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.031154 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:56:25.299887401 +0000 UTC Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.050564 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.050637 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.050585 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.050564 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.050830 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.051150 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.051254 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.051514 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.077294 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.077348 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.077362 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.077405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.077421 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.180231 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.180284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.180293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.180313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.180325 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.283107 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.283151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.283161 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.283176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.283188 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.386723 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.386794 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.386805 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.386827 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.386839 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.489813 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.489856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.489866 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.489907 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.489925 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.592927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.592974 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.592986 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.593002 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.593020 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.696329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.696396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.696409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.696434 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.696451 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.715273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.715308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.715317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.715332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.715342 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.727138 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.730693 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.730719 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.730728 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.730742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.730752 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.746192 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.750307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.750351 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.750364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.750384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.750397 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.764274 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.767832 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.767866 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.767893 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.767912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.767942 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.781969 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.785148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.785177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.785191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.785207 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.785218 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.797649 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:00Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:00 crc kubenswrapper[4729]: E0127 14:06:00.797815 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.799291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.799327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.799338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.799354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.799365 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.901733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.901768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.901785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.901800 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:00 crc kubenswrapper[4729]: I0127 14:06:00.901811 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:00Z","lastTransitionTime":"2026-01-27T14:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.004581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.004637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.004649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.004669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.004681 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.031731 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:14:40.085211131 +0000 UTC Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.051206 4729 scope.go:117] "RemoveContainer" containerID="7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.066657 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc 
kubenswrapper[4729]: I0127 14:06:01.090342 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.104826 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.107285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.107321 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.107333 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.107352 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.107366 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.121891 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc45
7d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.137732 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.153662 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.167003 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.179346 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.196280 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.210177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.210232 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.210245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.210265 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.210280 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.214376 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.228140 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.247560 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.265467 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898
ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.287328 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.300184 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.312915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.312966 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.312987 4729 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.313011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.313027 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.314093 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf8
6d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.327500 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:06:01Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.415390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.415419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.415428 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.415442 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.415454 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.518814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.518866 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.518926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.518957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.518975 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.621776 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.621819 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.621828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.621849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.621859 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.724580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.724613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.724621 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.724637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.724646 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.827253 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.827301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.827313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.827330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.827342 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.930289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.930336 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.930345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.930390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:01 crc kubenswrapper[4729]: I0127 14:06:01.930401 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:01Z","lastTransitionTime":"2026-01-27T14:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.031900 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:21:42.800423524 +0000 UTC Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.033377 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.033419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.033434 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.033453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.033468 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.050839 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.051000 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.051079 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.051099 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:02 crc kubenswrapper[4729]: E0127 14:06:02.051002 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:02 crc kubenswrapper[4729]: E0127 14:06:02.051203 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:02 crc kubenswrapper[4729]: E0127 14:06:02.051327 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:02 crc kubenswrapper[4729]: E0127 14:06:02.051564 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.136203 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.136265 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.136285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.136308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.136336 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.239320 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.239353 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.239360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.239373 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.239383 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.342562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.342628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.342648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.342675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.342694 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.405825 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/2.log" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.406453 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/1.log" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.410586 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerID="d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c" exitCode=1 Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.410633 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.410668 4729 scope.go:117] "RemoveContainer" containerID="7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.411406 4729 scope.go:117] "RemoveContainer" containerID="d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c" Jan 27 14:06:02 crc kubenswrapper[4729]: E0127 14:06:02.411551 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.426465 4729 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.441907 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.446738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.446762 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.446770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.446784 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.446795 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.457220 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.469909 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.482634 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.494586 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.507302 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc 
kubenswrapper[4729]: I0127 14:06:02.529327 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.547848 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.548902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.548943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.548954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.548973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.548985 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.560616 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.571319 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.582134 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.592745 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.601612 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.614267 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.631615 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7243a3ec5470ac4988f05fa891423b35e1ece57565da7686b3d4fdd0e1d3a460\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:05:47Z is after 2025-08-24T17:21:41Z]\\\\nI0127 14:05:47.272424 6158 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-ld6q8 in node crc\\\\nI0127 14:05:47.272433 6158 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-ld6q8 after 0 failed attempt(s)\\\\nI0127 14:05:47.270863 6158 services_controller.go:443] Built service openshift-kube-apiserver-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.109\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, int\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 
address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.643961 4729 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:02Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.652160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.652209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.652221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.652241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.652253 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.754417 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.754450 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.754458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.754471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.754481 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.856703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.856748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.856756 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.856770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.856779 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.959138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.959175 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.959186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.959200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.959208 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:02Z","lastTransitionTime":"2026-01-27T14:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:02 crc kubenswrapper[4729]: I0127 14:06:02.970640 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.032415 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 10:04:53.08220091 +0000 UTC Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.061747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.061794 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.061804 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.061820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.061830 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.164127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.164177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.164188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.164210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.164222 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.267181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.267235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.267256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.267280 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.267299 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.369826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.369889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.369901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.369918 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.369930 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.415584 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/2.log" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.419389 4729 scope.go:117] "RemoveContainer" containerID="d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c" Jan 27 14:06:03 crc kubenswrapper[4729]: E0127 14:06:03.419541 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.435064 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.457915 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.473102 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.473143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.473157 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.473174 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.473186 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.474774 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.493769 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.505639 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.523560 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.539678 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.551438 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.564802 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.576029 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.576080 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.576094 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.576112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.576124 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.577921 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.589089 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.601803 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.612084 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.622040 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.639455 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.654461 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.667671 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:03Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.678858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.678941 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.678958 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.678979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.678995 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.781457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.781494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.781511 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.781527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.781536 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.884557 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.884607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.884620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.884637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.884650 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.987186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.987422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.987514 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.987612 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:03 crc kubenswrapper[4729]: I0127 14:06:03.987709 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:03Z","lastTransitionTime":"2026-01-27T14:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.033489 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:10:06.007030712 +0000 UTC Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.050346 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.050398 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.050370 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.050361 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:04 crc kubenswrapper[4729]: E0127 14:06:04.050522 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:04 crc kubenswrapper[4729]: E0127 14:06:04.050635 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:04 crc kubenswrapper[4729]: E0127 14:06:04.050706 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:04 crc kubenswrapper[4729]: E0127 14:06:04.050778 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.063511 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.072726 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.086756 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.090605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.090675 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.090699 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.090731 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.090756 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.099729 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.108858 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.118683 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27
T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.135298 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.152292 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.173645 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.191692 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.192377 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.192405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.192413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.192427 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.192439 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.202728 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.214575 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.224347 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.234442 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.245228 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc 
kubenswrapper[4729]: I0127 14:06:04.264984 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.280961 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:04Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.294815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.294845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.294853 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.294867 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.294890 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.396649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.396692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.396700 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.396713 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.396723 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.502785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.502901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.502917 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.502935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.502949 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.551579 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:04 crc kubenswrapper[4729]: E0127 14:06:04.551730 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:06:04 crc kubenswrapper[4729]: E0127 14:06:04.551797 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:20.55177971 +0000 UTC m=+67.135970724 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.605909 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.605957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.605969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.605986 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.605997 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.710148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.710211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.710222 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.710241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.710258 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.813219 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.813268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.813277 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.813296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.813312 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.916056 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.916098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.916108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.916126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:04 crc kubenswrapper[4729]: I0127 14:06:04.916145 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:04Z","lastTransitionTime":"2026-01-27T14:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.019753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.019819 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.019839 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.019913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.019946 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.034029 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 03:57:22.835438493 +0000 UTC Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.123396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.123454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.123469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.123490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.123503 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.226410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.226505 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.226514 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.226535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.226549 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.329281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.329349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.329364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.329384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.329397 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.431649 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.431716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.431728 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.431750 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.431766 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.534371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.534441 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.534452 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.534476 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.534487 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.637291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.637379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.637398 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.637503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.637528 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.739667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.739721 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.739733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.739753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.739766 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.842982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.843072 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.843086 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.843109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.843123 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.945979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.946085 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.946105 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.946136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.946154 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:05Z","lastTransitionTime":"2026-01-27T14:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.969985 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.970211 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.970257 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.970311 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970414 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:06:05 crc 
kubenswrapper[4729]: E0127 14:06:05.970414 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:06:37.970314969 +0000 UTC m=+84.554506013 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970506 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:06:05 crc kubenswrapper[4729]: I0127 14:06:05.970554 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970567 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970714 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970502 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970614 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:37.970583987 +0000 UTC m=+84.554775151 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.970616 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.971041 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.971060 4729 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 
14:06:05.971109 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:37.971092802 +0000 UTC m=+84.555283836 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.971133 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:37.971122233 +0000 UTC m=+84.555313267 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:05 crc kubenswrapper[4729]: E0127 14:06:05.971159 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:37.971144754 +0000 UTC m=+84.555335788 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.034843 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:36:24.052028764 +0000 UTC Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.049985 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050141 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050004 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050212 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: E0127 14:06:06.050230 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050253 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050305 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: E0127 14:06:06.050343 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050362 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: E0127 14:06:06.050444 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.050651 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:06 crc kubenswrapper[4729]: E0127 14:06:06.050810 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.152613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.152683 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.152706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.152736 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.152751 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.255653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.255712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.255725 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.255746 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.255759 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.358475 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.358533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.358546 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.358565 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.358580 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.461210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.461299 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.461354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.461387 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.461408 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.564813 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.564870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.564905 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.564924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.564949 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.668141 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.668201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.668224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.668258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.668282 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.771690 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.771745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.771770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.771792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.771808 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.874222 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.874284 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.874301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.874324 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.874344 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.976753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.976792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.976802 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.976816 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:06 crc kubenswrapper[4729]: I0127 14:06:06.976825 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:06Z","lastTransitionTime":"2026-01-27T14:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.035384 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:37:04.111072815 +0000 UTC Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.079712 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.079776 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.079795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.079817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.079834 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.182236 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.182301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.182323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.182364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.182396 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.284816 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.284911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.284925 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.284943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.284956 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.387660 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.387701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.387714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.387732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.387745 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.490223 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.490283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.490297 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.490318 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.490333 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.593076 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.593118 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.593128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.593143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.593154 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.696052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.696128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.696143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.696171 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.696192 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.798467 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.798845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.798919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.798942 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.798964 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.798982 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.814337 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.825084 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.842553 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\
\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.855337 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.869044 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.882163 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.896267 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.900808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.900841 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.900852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.900868 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.900893 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:07Z","lastTransitionTime":"2026-01-27T14:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.907989 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc 
kubenswrapper[4729]: I0127 14:06:07.917140 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.927912 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 
2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.942012 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.953657 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.965319 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.976310 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:07 crc kubenswrapper[4729]: I0127 14:06:07.989780 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:07Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.003029 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.003076 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.003086 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.003103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.003116 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.007229 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.017478 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.035612 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:30:14.707888358 +0000 UTC Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.053159 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.053247 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.053181 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:08 crc kubenswrapper[4729]: E0127 14:06:08.053389 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.053431 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:08 crc kubenswrapper[4729]: E0127 14:06:08.053528 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:08 crc kubenswrapper[4729]: E0127 14:06:08.053601 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:08 crc kubenswrapper[4729]: E0127 14:06:08.053649 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.104932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.104984 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.104993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.105013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.105024 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.207864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.207933 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.207945 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.207963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.207973 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.309960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.310017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.310033 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.310057 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.310077 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.413201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.413287 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.413296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.413319 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.413333 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.518379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.518419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.518429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.518445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.518455 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.621082 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.621124 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.621134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.621148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.621157 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.724025 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.724085 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.724105 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.724131 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.724149 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.827275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.827345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.827355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.827375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.827387 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.929853 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.929946 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.929960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.929991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.930020 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:08Z","lastTransitionTime":"2026-01-27T14:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.977710 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 14:06:08 crc kubenswrapper[4729]: I0127 14:06:08.989843 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.001738 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9
a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:08Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.017912 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.032752 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.032844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.032864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.032915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.032931 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.034484 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc45
7d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.036608 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:59:35.710083221 +0000 UTC Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.048866 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.061825 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.072422 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc 
kubenswrapper[4729]: I0127 14:06:09.081638 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.094589 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 
2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.106572 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.117241 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.129817 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.135251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.135291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.135302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.135317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.135327 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.142048 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.157016 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.178312 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.191124 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.204080 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.216572 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:09Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.237733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.237770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.237783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.237801 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.237813 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.340849 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.340946 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.340960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.340988 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.341007 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.443891 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.443952 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.443965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.443982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.443992 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.546916 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.546990 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.547006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.547031 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.547049 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.650483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.650574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.650596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.650624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.650650 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.752926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.752960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.752972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.752987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.752997 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.854722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.854758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.854766 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.854781 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.854791 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.957838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.957940 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.957958 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.957981 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:09 crc kubenswrapper[4729]: I0127 14:06:09.957996 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:09Z","lastTransitionTime":"2026-01-27T14:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.037062 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:12:10.494041801 +0000 UTC Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.050617 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.050617 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.050617 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.050647 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.050802 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.050988 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.051067 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.051151 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.061860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.061968 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.062028 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.062066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.062083 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.165375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.165422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.165439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.165458 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.165470 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.268287 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.268332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.268347 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.268361 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.268373 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.371317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.371370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.371384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.371399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.371408 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.474742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.474803 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.474814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.474835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.474848 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.578356 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.578425 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.578442 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.578472 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.578489 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.681771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.681837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.681851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.681902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.681921 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.785460 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.785510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.785521 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.785542 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.785560 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.808547 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.808619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.808635 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.808654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.808672 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.821497 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.825630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.825668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.825678 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.825695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.825708 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.839123 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.843337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.843394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.843410 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.843435 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.843452 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.872255 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.877906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.877970 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.877985 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.878009 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.878025 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.901537 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.908335 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.908385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.908394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.908415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.908426 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.924684 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:10Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:10 crc kubenswrapper[4729]: E0127 14:06:10.924821 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.926964 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.927022 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.927032 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.927049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:10 crc kubenswrapper[4729]: I0127 14:06:10.927062 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:10Z","lastTransitionTime":"2026-01-27T14:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.029523 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.029571 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.029587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.029610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.029624 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.037739 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:14:02.647912011 +0000 UTC Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.132553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.132610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.132622 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.132639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.132651 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.235188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.235237 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.235249 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.235271 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.235283 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.338787 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.338919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.338943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.338967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.338981 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.442036 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.442107 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.442119 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.442139 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.442154 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.544389 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.544439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.544452 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.544469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.544480 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.646859 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.646926 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.646937 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.646954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.646966 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.749648 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.749680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.749695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.749710 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.749721 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.851667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.851716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.851726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.851740 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.851752 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.956239 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.956365 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.956378 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.956398 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:11 crc kubenswrapper[4729]: I0127 14:06:11.956418 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:11Z","lastTransitionTime":"2026-01-27T14:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.038622 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 06:19:06.413999734 +0000 UTC Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.049944 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.050042 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:12 crc kubenswrapper[4729]: E0127 14:06:12.050232 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.050306 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.050369 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:12 crc kubenswrapper[4729]: E0127 14:06:12.050591 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:12 crc kubenswrapper[4729]: E0127 14:06:12.050761 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:12 crc kubenswrapper[4729]: E0127 14:06:12.050903 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.058503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.058539 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.058547 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.058561 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.058573 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.161081 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.161154 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.161168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.161191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.161205 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.263596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.263653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.263669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.263691 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.263705 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.367014 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.367065 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.367080 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.367102 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.367118 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.470002 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.470051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.470068 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.470092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.470107 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.573164 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.573208 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.573224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.573245 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.573260 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.676605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.676660 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.676679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.676702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.676719 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.780689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.780748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.780770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.780833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.780856 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.884223 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.884283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.884304 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.884330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.884347 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.987784 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.987820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.987833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.987850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:12 crc kubenswrapper[4729]: I0127 14:06:12.987861 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:12Z","lastTransitionTime":"2026-01-27T14:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.039600 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:45:55.783596709 +0000 UTC Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.090931 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.090988 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.091008 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.091032 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.091048 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.194991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.195046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.195059 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.195077 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.195088 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.298086 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.298180 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.298204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.298234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.298259 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.400932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.400982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.401000 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.401022 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.401038 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.503963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.504027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.504044 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.504067 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.504085 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.606999 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.607093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.607108 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.607136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.607156 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.709562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.709598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.709610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.709628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.709640 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.812767 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.812817 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.812830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.812847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.812863 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.915676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.915717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.915726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.915742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:13 crc kubenswrapper[4729]: I0127 14:06:13.915752 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:13Z","lastTransitionTime":"2026-01-27T14:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.018137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.018191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.018203 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.018221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.018233 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.040007 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 02:03:50.015256403 +0000 UTC Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.050342 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.050352 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.050456 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.050512 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:14 crc kubenswrapper[4729]: E0127 14:06:14.050658 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:14 crc kubenswrapper[4729]: E0127 14:06:14.050753 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:14 crc kubenswrapper[4729]: E0127 14:06:14.050837 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:14 crc kubenswrapper[4729]: E0127 14:06:14.051014 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.076296 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\
\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.097591 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.111635 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.121191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.121286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.121311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.121345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.121369 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.134194 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.154136 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.167121 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc 
kubenswrapper[4729]: I0127 14:06:14.185662 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.205594 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.220755 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.223422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.223457 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.223465 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.223486 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.223496 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.239416 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z 
is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.255241 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.267569 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.280925 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.299128 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.320696 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.331795 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.331869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.331901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.331923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.331938 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.341835 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.357340 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.371691 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:14Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.435000 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.435042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.435053 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.435072 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.435088 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.537114 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.537177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.537189 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.537209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.537223 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.640536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.640620 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.640638 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.640696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.640724 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.743179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.743676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.743686 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.743703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.743715 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.848123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.848166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.848178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.848198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.848212 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.951574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.951647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.951664 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.951689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:14 crc kubenswrapper[4729]: I0127 14:06:14.951706 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:14Z","lastTransitionTime":"2026-01-27T14:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.040933 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:28:24.950408353 +0000 UTC Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.054991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.055052 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.055068 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.055090 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.055106 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.157562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.157607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.157619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.157638 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.157650 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.260673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.260722 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.260738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.260760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.260776 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.363703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.363748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.363758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.363775 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.363786 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.466542 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.466601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.466614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.466636 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.466662 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.569919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.569975 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.569984 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.570000 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.570010 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.673200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.673322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.673343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.673367 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.673379 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.776998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.777067 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.777082 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.777109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.777127 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.879743 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.879802 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.879815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.879833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.879845 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.982448 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.982498 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.982515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.982537 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:15 crc kubenswrapper[4729]: I0127 14:06:15.982553 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:15Z","lastTransitionTime":"2026-01-27T14:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.041475 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:54:42.992914891 +0000 UTC Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.050556 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.050621 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.050715 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:16 crc kubenswrapper[4729]: E0127 14:06:16.050775 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.050810 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:16 crc kubenswrapper[4729]: E0127 14:06:16.050997 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:16 crc kubenswrapper[4729]: E0127 14:06:16.051162 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:16 crc kubenswrapper[4729]: E0127 14:06:16.051287 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.086843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.086934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.086951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.086977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.087003 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.189810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.189858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.189870 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.189902 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.189913 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.292173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.292224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.292236 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.292255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.292269 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.394405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.394454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.394466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.394482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.394493 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.496345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.496445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.496470 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.496501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.496522 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.598621 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.598918 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.599025 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.599130 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.599238 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.701777 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.701830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.701843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.701864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.701896 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.805308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.805368 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.805380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.805408 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.805424 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.908270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.908325 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.908338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.908359 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:16 crc kubenswrapper[4729]: I0127 14:06:16.908759 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:16Z","lastTransitionTime":"2026-01-27T14:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.011982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.012016 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.012027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.012042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.012052 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.042458 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:40:13.654094027 +0000 UTC Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.116089 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.116197 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.116210 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.116234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.116249 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.219546 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.219611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.219627 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.219651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.219664 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.322432 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.322474 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.322488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.322504 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.322517 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.424637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.424667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.424677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.424693 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.424704 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.526934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.526968 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.526979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.526994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.527004 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.630232 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.630264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.630275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.630291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.630302 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.733509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.733804 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.733906 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.734001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.734088 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.837862 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.838401 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.838488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.838563 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.838636 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.942425 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.942807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.942919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.942989 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:17 crc kubenswrapper[4729]: I0127 14:06:17.943129 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:17Z","lastTransitionTime":"2026-01-27T14:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.042656 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 03:03:44.137423496 +0000 UTC Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.046346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.046400 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.046415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.046445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.046461 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.051153 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:18 crc kubenswrapper[4729]: E0127 14:06:18.051341 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.051443 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.051544 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.051923 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:18 crc kubenswrapper[4729]: E0127 14:06:18.052015 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:18 crc kubenswrapper[4729]: E0127 14:06:18.052248 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:18 crc kubenswrapper[4729]: E0127 14:06:18.052345 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.053184 4729 scope.go:117] "RemoveContainer" containerID="d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c" Jan 27 14:06:18 crc kubenswrapper[4729]: E0127 14:06:18.053570 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.149453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.149520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.149531 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.149557 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.149575 
4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.257420 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.257556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.257762 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.257831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.257864 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.360840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.360949 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.360966 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.360994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.361013 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.464315 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.464373 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.464382 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.464405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.464424 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.567631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.567679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.567691 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.567709 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.567726 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.671146 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.671208 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.671223 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.671251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.671269 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.774525 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.774577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.774588 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.774605 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.774616 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.878580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.878619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.878630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.878647 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.878659 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.980934 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.980971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.980981 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.980996 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:18 crc kubenswrapper[4729]: I0127 14:06:18.981005 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:18Z","lastTransitionTime":"2026-01-27T14:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.043122 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:59:59.620273354 +0000 UTC Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.083481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.083759 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.083841 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.083978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.084066 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.186399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.186440 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.186454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.186469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.186480 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.289607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.289929 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.290013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.290083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.290166 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.392685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.392715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.392724 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.392741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.392751 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.495281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.495337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.495346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.495366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.495381 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.599673 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.599735 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.599746 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.599770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.599786 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.703177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.703225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.703236 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.703257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.703272 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.805690 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.806409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.806711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.806792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.806935 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.910003 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.910058 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.910074 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.910093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:19 crc kubenswrapper[4729]: I0127 14:06:19.910103 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:19Z","lastTransitionTime":"2026-01-27T14:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.012215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.012262 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.012273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.012288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.012300 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.043892 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 03:40:35.832803047 +0000 UTC Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.050488 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.050580 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:20 crc kubenswrapper[4729]: E0127 14:06:20.050635 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.050607 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.050934 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:20 crc kubenswrapper[4729]: E0127 14:06:20.050843 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:20 crc kubenswrapper[4729]: E0127 14:06:20.051056 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:20 crc kubenswrapper[4729]: E0127 14:06:20.051109 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.114913 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.114960 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.114973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.114992 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.115006 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.218075 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.218143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.218160 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.218186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.218205 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.320954 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.321025 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.321043 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.321070 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.321094 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.424225 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.424288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.424302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.424322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.424333 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.526350 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.526385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.526401 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.526418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.526429 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.628921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.628984 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.628996 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.629012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.629025 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.630668 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:20 crc kubenswrapper[4729]: E0127 14:06:20.630865 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:06:20 crc kubenswrapper[4729]: E0127 14:06:20.630973 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:06:52.630949991 +0000 UTC m=+99.215141005 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.731930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.731982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.732005 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.732025 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.732036 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.834613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.835081 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.835197 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.835309 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.835404 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.938831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.938898 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.938907 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.938921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:20 crc kubenswrapper[4729]: I0127 14:06:20.938930 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:20Z","lastTransitionTime":"2026-01-27T14:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.041716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.041755 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.041766 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.041780 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.041789 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.042964 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.042998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.043008 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.043030 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.043040 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.044074 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:21:58.984902532 +0000 UTC Jan 27 14:06:21 crc kubenswrapper[4729]: E0127 14:06:21.054894 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",
\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.059385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.059414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.059423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.059436 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.059447 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.061140 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 14:06:21 crc kubenswrapper[4729]: E0127 14:06:21.074008 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0
ecee101e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.077604 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.077679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.077691 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.077715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.077728 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: E0127 14:06:21.091073 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.095240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.095273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.095285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.095317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.095328 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: E0127 14:06:21.107766 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.110905 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.110944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.110952 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.110968 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.110980 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: E0127 14:06:21.128016 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:21Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:21 crc kubenswrapper[4729]: E0127 14:06:21.128234 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.143670 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.143733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.143749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.143768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.143783 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.245582 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.245656 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.245668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.245684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.245695 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.348038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.348079 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.348090 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.348110 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.348125 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.450031 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.450064 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.450097 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.450114 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.450125 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.552132 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.552173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.552183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.552198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.552207 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.654278 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.654327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.654338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.654355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.654372 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.756372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.756444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.756455 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.756468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.756477 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.858737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.858779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.858791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.858807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.858819 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.961768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.962071 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.962152 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.962221 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:21 crc kubenswrapper[4729]: I0127 14:06:21.962289 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:21Z","lastTransitionTime":"2026-01-27T14:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.044794 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:45:40.500350952 +0000 UTC Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.050249 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.050265 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.050281 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.050306 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:22 crc kubenswrapper[4729]: E0127 14:06:22.050447 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:22 crc kubenswrapper[4729]: E0127 14:06:22.050697 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:22 crc kubenswrapper[4729]: E0127 14:06:22.050788 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:22 crc kubenswrapper[4729]: E0127 14:06:22.050929 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.064156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.064190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.064201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.064213 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.064222 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.166967 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.167083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.167111 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.167137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.167149 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.269356 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.269405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.269415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.269431 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.269443 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.371321 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.371359 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.371369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.371385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.371395 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.473416 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.473455 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.473467 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.473481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.473492 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.575818 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.575897 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.575912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.575930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.575940 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.678453 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.678498 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.678513 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.678530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.678539 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.780732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.780786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.780797 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.780814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.780824 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.883027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.883068 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.883078 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.883093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.883105 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.985717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.985770 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.985781 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.985799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:22 crc kubenswrapper[4729]: I0127 14:06:22.985813 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:22Z","lastTransitionTime":"2026-01-27T14:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.045941 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:32:54.981356696 +0000 UTC Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.088201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.088255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.088270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.088288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.088299 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.190642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.190677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.190685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.190699 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.190710 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.293698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.293738 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.293747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.293761 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.293771 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.396349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.396670 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.396766 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.396869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.396993 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.488555 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/0.log" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.488823 4729 generic.go:334] "Generic (PLEG): container finished" podID="c96a4b30-dced-4bf8-8f46-348c1b8972b3" containerID="76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711" exitCode=1 Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.488930 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerDied","Data":"76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.489462 4729 scope.go:117] "RemoveContainer" containerID="76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.499786 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.499823 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.499834 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.499850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.499860 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.501862 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.517644 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.530411 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.541853 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.554927 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.566015 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.576586 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.596794 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753
728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.602270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.602308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.602318 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.602334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.602345 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.610583 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.622951 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.637891 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.649807 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.662436 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.672686 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.683836 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.697635 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.704993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.705047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.705055 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.705069 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.705078 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.716221 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.727413 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.739382 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:23Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.807304 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.807349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.807376 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.807392 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.807403 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.909507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.909553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.909565 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.909581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:23 crc kubenswrapper[4729]: I0127 14:06:23.909592 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:23Z","lastTransitionTime":"2026-01-27T14:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.011679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.011711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.011725 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.011740 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.011752 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.046369 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 13:04:58.359545419 +0000 UTC Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.050899 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.050905 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:24 crc kubenswrapper[4729]: E0127 14:06:24.051050 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:24 crc kubenswrapper[4729]: E0127 14:06:24.051120 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.050930 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:24 crc kubenswrapper[4729]: E0127 14:06:24.051214 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.051362 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:24 crc kubenswrapper[4729]: E0127 14:06:24.051440 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.068745 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.081831 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.095154 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.106985 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.114091 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.114129 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.114142 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.114159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.114170 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.120678 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.133102 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.143817 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc 
kubenswrapper[4729]: I0127 14:06:24.162963 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.175474 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.186182 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.197918 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.212109 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.216642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.216680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.216694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.216734 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.216750 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.224469 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.234693 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.246224 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc 
kubenswrapper[4729]: I0127 14:06:24.261696 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc53
6b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.283603 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.298493 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.311865 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.319623 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.319661 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.319672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.319688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.319698 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.421235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.421478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.421541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.421634 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.421702 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.495049 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/0.log" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.495123 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerStarted","Data":"3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.513105 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\
"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.524215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.524250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.524264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.524281 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.524294 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.536784 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.553002 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.565231 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.575442 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.591443 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.609326 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 
14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.625604 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.627075 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.627127 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.627140 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.627162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.627176 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.640694 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.654856 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.667729 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc 
kubenswrapper[4729]: I0127 14:06:24.688846 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.703742 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.716428 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.728004 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.729371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.729407 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.729419 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.729436 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.729447 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.742165 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.752330 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.763391 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.774577 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:24Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:24 crc 
kubenswrapper[4729]: I0127 14:06:24.831814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.831852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.831865 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.831901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.831915 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.934508 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.934546 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.934557 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.934572 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:24 crc kubenswrapper[4729]: I0127 14:06:24.934585 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:24Z","lastTransitionTime":"2026-01-27T14:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.037471 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.037510 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.037520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.037536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.037546 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.046761 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:07:06.345020373 +0000 UTC Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.139575 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.139613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.139626 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.139644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.139658 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.242040 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.242083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.242092 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.242106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.242115 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.344330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.344384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.344397 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.344413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.344426 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.446602 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.446643 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.446653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.446669 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.446680 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.549411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.549456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.549467 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.549483 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.549494 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.651360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.651408 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.651422 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.651469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.651483 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.753676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.753706 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.753716 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.753731 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.753741 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.857489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.857531 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.857541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.857556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.857567 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.960057 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.960088 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.960096 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.960112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:25 crc kubenswrapper[4729]: I0127 14:06:25.960120 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:25Z","lastTransitionTime":"2026-01-27T14:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.046956 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:37:19.382372076 +0000 UTC Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.050432 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.050501 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:26 crc kubenswrapper[4729]: E0127 14:06:26.050574 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.050633 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.050651 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:26 crc kubenswrapper[4729]: E0127 14:06:26.050750 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:26 crc kubenswrapper[4729]: E0127 14:06:26.050820 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:26 crc kubenswrapper[4729]: E0127 14:06:26.050969 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.062004 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.062047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.062057 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.062073 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.062085 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.165078 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.165126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.165136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.165152 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.165163 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.267854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.267936 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.267953 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.267972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.267985 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.370589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.370630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.370640 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.370658 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.370671 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.473094 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.473138 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.473150 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.473168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.473181 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.576400 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.576456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.576468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.576489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.576501 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.679299 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.679359 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.679370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.679385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.679394 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.782066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.782121 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.782136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.782156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.782172 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.886300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.886343 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.886354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.886369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.886378 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.990261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.990308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.990317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.990330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:26 crc kubenswrapper[4729]: I0127 14:06:26.990340 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:26Z","lastTransitionTime":"2026-01-27T14:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.047447 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:51:12.223129665 +0000 UTC Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.093143 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.093183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.093195 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.093212 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.093225 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.196730 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.196816 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.196846 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.196927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.196973 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.299262 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.299299 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.299311 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.299328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.299377 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.402540 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.402601 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.402614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.402640 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.402658 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.504437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.504485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.504501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.504520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.504532 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.606358 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.606390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.606399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.606413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.606422 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.709399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.709447 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.709461 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.709477 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.709489 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.811791 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.811845 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.811858 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.811896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.811909 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.914568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.914613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.914630 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.914653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:27 crc kubenswrapper[4729]: I0127 14:06:27.914669 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:27Z","lastTransitionTime":"2026-01-27T14:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.017087 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.017139 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.017153 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.017171 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.017183 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.047608 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:40:24.049068153 +0000 UTC Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.049996 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.050048 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.050017 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.049998 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:28 crc kubenswrapper[4729]: E0127 14:06:28.050168 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:28 crc kubenswrapper[4729]: E0127 14:06:28.050282 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:28 crc kubenswrapper[4729]: E0127 14:06:28.050341 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:28 crc kubenswrapper[4729]: E0127 14:06:28.050408 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.122779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.122828 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.122840 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.122855 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.122929 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.225501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.225546 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.225558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.225575 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.225587 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.328854 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.328925 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.328939 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.328956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.328968 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.431027 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.431053 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.431062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.431075 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.431084 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.533530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.533569 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.533582 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.533598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.533609 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.636481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.636516 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.636527 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.636541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.636551 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.739390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.739639 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.739717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.740001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.740103 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.842638 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.842687 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.842704 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.842725 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.842745 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.945274 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.945306 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.945316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.945331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:28 crc kubenswrapper[4729]: I0127 14:06:28.945342 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:28Z","lastTransitionTime":"2026-01-27T14:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.047224 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.047264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.047275 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.047289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.047299 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.048489 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:53:55.612873577 +0000 UTC Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.150198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.150255 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.150267 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.150290 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.150303 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.253144 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.253184 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.253198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.253216 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.253229 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.355691 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.355733 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.355748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.355764 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.355776 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.458860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.458914 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.458924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.458936 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.458947 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.560839 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.560899 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.560908 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.560924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.560936 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.663485 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.663551 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.663567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.663589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.663604 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.766375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.766421 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.766432 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.766444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.766453 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.869152 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.869188 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.869227 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.869241 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.869250 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.971689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.971715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.971724 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.971737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:29 crc kubenswrapper[4729]: I0127 14:06:29.971746 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:29Z","lastTransitionTime":"2026-01-27T14:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.049013 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:05:05.949129335 +0000 UTC Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.050340 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.050396 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.050358 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:30 crc kubenswrapper[4729]: E0127 14:06:30.050483 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:30 crc kubenswrapper[4729]: E0127 14:06:30.050615 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.050690 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:30 crc kubenswrapper[4729]: E0127 14:06:30.050768 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:30 crc kubenswrapper[4729]: E0127 14:06:30.051197 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.074561 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.074643 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.074659 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.074676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.074690 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.177234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.177285 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.177298 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.177316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.177328 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.279946 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.280001 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.280013 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.280030 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.280043 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.383133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.383177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.383185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.383198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.383208 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.485684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.485772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.485815 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.485932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.485990 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.588744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.588785 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.588801 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.588818 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.588828 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.691975 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.692024 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.692034 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.692049 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.692057 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.794614 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.794676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.794692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.794714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.794729 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.897063 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.897105 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.897120 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.897136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.897145 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.999679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.999747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.999768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.999792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:30 crc kubenswrapper[4729]: I0127 14:06:30.999808 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:30Z","lastTransitionTime":"2026-01-27T14:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.049417 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:28:23.654593691 +0000 UTC Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.102196 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.102263 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.102277 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.102299 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.102313 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.204798 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.204843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.204864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.204951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.204965 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.307811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.307915 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.307927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.307950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.307964 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.411028 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.411083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.411099 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.411119 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.411133 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.427321 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.427364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.427374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.427390 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.427401 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: E0127 14:06:31.442964 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.446437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.446489 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.446502 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.446519 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.446532 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: E0127 14:06:31.459410 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.463327 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.463371 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.463380 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.463396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.463407 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: E0127 14:06:31.474652 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.477936 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.477991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.478003 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.478026 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.478041 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: E0127 14:06:31.491938 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.495553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.495597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.495608 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.495624 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.495636 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: E0127 14:06:31.507008 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:31Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:31 crc kubenswrapper[4729]: E0127 14:06:31.507127 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.513714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.513747 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.513758 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.513772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.513786 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.616084 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.616133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.616145 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.616162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.616173 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.719238 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.719293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.719307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.719323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.719334 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.822260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.822289 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.822298 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.822310 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.822318 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.925063 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.925122 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.925137 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.925158 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:31 crc kubenswrapper[4729]: I0127 14:06:31.925203 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:31Z","lastTransitionTime":"2026-01-27T14:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.027632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.027690 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.027701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.027717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.027729 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.050261 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:10:56.104771927 +0000 UTC Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.050452 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.050488 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.050763 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.050794 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:32 crc kubenswrapper[4729]: E0127 14:06:32.050964 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:32 crc kubenswrapper[4729]: E0127 14:06:32.051025 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.051122 4729 scope.go:117] "RemoveContainer" containerID="d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c" Jan 27 14:06:32 crc kubenswrapper[4729]: E0127 14:06:32.051188 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:32 crc kubenswrapper[4729]: E0127 14:06:32.051229 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.129610 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.129654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.129667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.129685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.129697 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.233456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.233509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.233520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.233545 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.233563 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.337011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.337088 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.337104 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.337128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.337141 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.440558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.440607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.440618 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.440637 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.440654 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.521119 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/2.log" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.523907 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.524557 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.538422 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.543970 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.544012 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.544023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.544042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.544062 4729 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.562486 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.585673 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.601410 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.624491 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.646265 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.646307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.646317 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.646332 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.646344 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.648685 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.666414 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.677370 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc 
kubenswrapper[4729]: I0127 14:06:32.688130 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.700842 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.712217 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.720563 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.732245 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.744670 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.748326 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.748369 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.748382 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc 
kubenswrapper[4729]: I0127 14:06:32.748397 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.748408 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.756105 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.768526 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.790418 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.811548 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.825952 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:32Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.850695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.850744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.850757 4729 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.850772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.850784 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.953294 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.953347 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.953356 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.953375 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:32 crc kubenswrapper[4729]: I0127 14:06:32.953385 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:32Z","lastTransitionTime":"2026-01-27T14:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.050368 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 14:03:40.735764804 +0000 UTC Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.056266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.056310 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.056321 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.056338 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.056349 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.158300 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.158329 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.158337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.158351 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.158360 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.260495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.260586 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.260596 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.260609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.260618 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.362869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.363045 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.363061 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.363080 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.363090 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.465896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.465940 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.465950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.465965 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.465975 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.529260 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/3.log" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.530128 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/2.log" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.533267 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" exitCode=1 Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.533313 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.533384 4729 scope.go:117] "RemoveContainer" containerID="d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.534040 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:06:33 crc kubenswrapper[4729]: E0127 14:06:33.534496 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.548200 4729 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.560525 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.567833 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.567896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.567908 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 
14:06:33.567927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.567938 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.571287 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.580620 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc 
kubenswrapper[4729]: I0127 14:06:33.598545 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.612911 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.625678 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.638154 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.653268 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.667579 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.670330 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.670388 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.670407 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc 
kubenswrapper[4729]: I0127 14:06:33.670428 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.670456 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.682038 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.696094 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.711000 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.732643 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:33Z\\\",\\\"message\\\":\\\"62 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 14:06:33.037053 6783 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-khqcl in node crc\\\\nI0127 14:06:33.037057 6783 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 14:06:33.037065 6783 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 14:06:33.036868 6783 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037085 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037096 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0127 14:06:33.037117 6783 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.745947 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.760078 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.772512 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.772570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.772584 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.772600 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.772611 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.775485 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.788938 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-
27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.802018 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-27T14:06:33Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.874977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.875016 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.875024 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.875042 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.875052 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.978017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.978071 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.978083 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.978100 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:33 crc kubenswrapper[4729]: I0127 14:06:33.978113 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:33Z","lastTransitionTime":"2026-01-27T14:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.049968 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.050019 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.050033 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:34 crc kubenswrapper[4729]: E0127 14:06:34.050096 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.049968 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:34 crc kubenswrapper[4729]: E0127 14:06:34.050207 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:34 crc kubenswrapper[4729]: E0127 14:06:34.050263 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:34 crc kubenswrapper[4729]: E0127 14:06:34.050303 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.050516 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:39:27.147835284 +0000 UTC Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.065752 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.079132 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc 
kubenswrapper[4729]: I0127 14:06:34.080366 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.080391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.080399 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.080414 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.080424 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.098071 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.111112 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.122786 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.135164 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.147734 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] 
Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.159839 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.169851 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.180028 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc 
kubenswrapper[4729]: I0127 14:06:34.182517 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.182562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.182581 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.182599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.182610 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.194647 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.206374 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.215939 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.226011 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.238311 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.255207 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6a7097a5243eb28f551857ddc2944bbc6ec450e78849c66aee8b2dce432765c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:02Z\\\",\\\"message\\\":\\\"ner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI0127 14:06:02.241079 6381 services_controller.go:360] Finished syncing service catalog-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 3.858297ms\\\\nI0127 14:06:02.241322 6381 address_set.go:302] 
New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0127 14:06:02.241350 6381 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0127 14:06:02.241374 6381 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0127 14:06:02.241440 6381 factory.go:1336] Added *v1.Node event handler 7\\\\nI0127 14:06:02.241512 6381 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0127 14:06:02.242010 6381 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0127 14:06:02.242123 6381 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0127 14:06:02.242171 6381 ovnkube.go:599] Stopped ovnkube\\\\nI0127 14:06:02.242206 6381 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 14:06:02.242314 6381 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:33Z\\\",\\\"message\\\":\\\"62 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 14:06:33.037053 6783 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-khqcl in node crc\\\\nI0127 14:06:33.037057 6783 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 14:06:33.037065 6783 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 14:06:33.036868 6783 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037085 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037096 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0127 14:06:33.037117 6783 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"nam
e\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.264799 4729 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.274246 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.285628 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.285672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.285680 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.285695 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.285717 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.286232 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.388183 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.388234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.388244 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.388260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.388270 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.490286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.490326 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.490337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.490352 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.490362 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.537838 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/3.log" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.540840 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:06:34 crc kubenswrapper[4729]: E0127 14:06:34.541015 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.550637 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.563782 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.579069 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.593345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 
14:06:34.593627 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.593720 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.593810 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.593923 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.620817 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.636946 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.649943 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe03
6f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.661173 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.672370 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.687093 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.696464 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.696509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.696520 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.696536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.696546 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.707335 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:33Z\\\",\\\"message\\\":\\\"62 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 14:06:33.037053 6783 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-khqcl in node crc\\\\nI0127 14:06:33.037057 6783 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 14:06:33.037065 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 14:06:33.036868 6783 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037085 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037096 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0127 14:06:33.037117 6783 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.718965 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b092e8a4
157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.733746 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.745668 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.765535 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1
d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.784536 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f0
73d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.798636 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.798863 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.798918 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.798930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.798948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.798959 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.813120 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.825491 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.838365 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:34Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:34 crc 
kubenswrapper[4729]: I0127 14:06:34.901211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.901259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.901268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.901282 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:34 crc kubenswrapper[4729]: I0127 14:06:34.901293 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:34Z","lastTransitionTime":"2026-01-27T14:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.003838 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.003912 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.003927 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.003944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.003955 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.051433 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:20:37.116779856 +0000 UTC Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.106219 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.106279 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.106292 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.106314 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.106327 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.208235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.208273 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.208283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.208296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.208305 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.310788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.310831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.310843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.310857 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.310867 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.413619 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.413654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.413665 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.413684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.413695 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.515744 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.515784 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.515792 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.515806 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.515818 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.619364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.619530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.619554 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.619584 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.619604 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.722228 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.722276 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.722287 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.722305 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.722317 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.824852 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.825006 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.825046 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.825085 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.825107 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.927096 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.927147 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.927158 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.927176 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:35 crc kubenswrapper[4729]: I0127 14:06:35.927188 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:35Z","lastTransitionTime":"2026-01-27T14:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.030058 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.030102 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.030112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.030126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.030136 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.050556 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.050632 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.050689 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:36 crc kubenswrapper[4729]: E0127 14:06:36.050688 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.050563 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:36 crc kubenswrapper[4729]: E0127 14:06:36.050796 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:36 crc kubenswrapper[4729]: E0127 14:06:36.050963 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:36 crc kubenswrapper[4729]: E0127 14:06:36.051050 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.051931 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:37:48.147137323 +0000 UTC Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.132994 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.133331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.133340 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.133354 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.133364 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.235509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.235558 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.235570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.235587 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.235600 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.337983 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.338024 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.338035 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.338051 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.338063 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.440313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.440355 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.440364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.440378 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.440387 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.542148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.542184 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.542192 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.542205 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.542213 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.644779 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.645508 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.645543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.645563 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.645573 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.748768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.748811 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.748835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.748853 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.748863 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.851739 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.851796 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.851809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.851824 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.851833 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.953947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.954021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.954044 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.954075 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:36 crc kubenswrapper[4729]: I0127 14:06:36.954096 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:36Z","lastTransitionTime":"2026-01-27T14:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.052749 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:36:13.770836889 +0000 UTC Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.057250 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.057282 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.057291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.057372 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.057393 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.160259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.160307 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.160316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.160340 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.160358 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.263848 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.263958 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.263977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.264003 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.264020 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.366478 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.366580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.366597 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.366677 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.366713 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.470325 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.470385 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.470409 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.470429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.470443 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.573425 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.573467 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.573488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.573503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.573514 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.676291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.676360 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.676381 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.676405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.676424 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.779200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.779302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.779322 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.779346 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.779360 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.883118 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.883191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.883204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.883227 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.883242 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.986843 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.986917 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.986930 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.986950 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:37 crc kubenswrapper[4729]: I0127 14:06:37.986961 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:37Z","lastTransitionTime":"2026-01-27T14:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.015193 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.015500 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:42.015445759 +0000 UTC m=+148.599636773 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.015608 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.015701 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.015759 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.015814 4729 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.015929 4729 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.015954 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016006 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016025 4729 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016021 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.016000457 +0000 UTC m=+148.600191461 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016053 4729 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016102 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.016077159 +0000 UTC m=+148.600268363 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016084 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016200 4729 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016223 4729 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016204 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.016168192 +0000 UTC m=+148.600359396 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.016333 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.016294706 +0000 UTC m=+148.600485710 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.050397 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.050502 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.050548 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.050733 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.050738 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.050929 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.051220 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:38 crc kubenswrapper[4729]: E0127 14:06:38.051333 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.052864 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:13:21.624287764 +0000 UTC Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.089936 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.089980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.089990 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.090007 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.090017 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.192928 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.192961 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.192969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.192984 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.192994 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.295976 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.296031 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.296044 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.296062 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.296268 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.398694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.398755 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.398767 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.398783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.398795 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.501652 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.501711 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.501726 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.501760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.501773 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.604632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.604689 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.604717 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.604741 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.604756 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.706702 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.706736 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.706745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.706759 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.706768 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.809451 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.809501 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.809513 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.809530 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.809543 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.911932 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.911971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.911982 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.911999 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:38 crc kubenswrapper[4729]: I0127 14:06:38.912011 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:38Z","lastTransitionTime":"2026-01-27T14:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.014599 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.014653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.014664 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.014681 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.014694 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.054010 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:38:33.649652643 +0000 UTC Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.116836 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.116916 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.116928 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.116948 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.116962 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.221193 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.221258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.221269 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.221286 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.221298 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.324125 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.324459 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.324472 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.324490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.324503 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.427165 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.427297 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.427331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.427370 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.427431 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.530679 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.530727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.530737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.530753 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.530763 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.633072 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.633136 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.633153 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.633179 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.633197 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.735529 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.735644 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.735664 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.735682 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.735701 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.838098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.838148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.838162 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.838181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.838195 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.939944 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.940007 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.940021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.940040 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:39 crc kubenswrapper[4729]: I0127 14:06:39.940052 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:39Z","lastTransitionTime":"2026-01-27T14:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.043010 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.043061 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.043072 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.043091 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.043103 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.050511 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.050517 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.050581 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:40 crc kubenswrapper[4729]: E0127 14:06:40.050603 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:40 crc kubenswrapper[4729]: E0127 14:06:40.050754 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.050785 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:40 crc kubenswrapper[4729]: E0127 14:06:40.050991 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:40 crc kubenswrapper[4729]: E0127 14:06:40.051060 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.054280 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:00:27.452675053 +0000 UTC Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.144903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.144946 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.144957 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.144971 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.144982 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.248111 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.248148 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.248156 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.248170 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.248179 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.350394 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.350451 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.350462 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.350481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.350493 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.452413 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.452448 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.452456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.452469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.452477 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.555665 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.555715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.555727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.555748 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.555760 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.658178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.658242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.658259 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.658282 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.658296 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.760847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.760908 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.760920 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.760935 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.760945 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.863901 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.863947 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.863956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.863969 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.863978 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.965963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.966011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.966021 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.966037 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:40 crc kubenswrapper[4729]: I0127 14:06:40.966047 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:40Z","lastTransitionTime":"2026-01-27T14:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.054429 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:09:35.827170698 +0000 UTC Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.068207 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.068257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.068269 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.068293 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.068305 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.170128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.170159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.170168 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.170180 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.170188 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.272089 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.272171 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.272204 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.272235 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.272257 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.374978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.375023 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.375038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.375060 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.375075 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.478469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.478533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.478547 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.478566 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.478592 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.580368 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.580405 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.580415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.580429 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.580439 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.683175 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.683215 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.683226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.683240 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.683250 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.703038 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.703314 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.703412 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.703525 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.703616 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: E0127 14:06:41.720330 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.724991 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.725033 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.725043 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.725059 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.725069 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: E0127 14:06:41.739138 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.743258 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.743288 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.743296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.743312 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.743321 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: E0127 14:06:41.756167 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.759243 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.759283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.759292 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.759308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.759318 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: E0127 14:06:41.771930 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.776047 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.776093 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.776105 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.776123 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.776134 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: E0127 14:06:41.790166 4729 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e56ec573-67b5-4644-a470-a69acd2c4e85\\\",\\\"systemUUID\\\":\\\"854545c8-b5ae-49d8-92cd-d4d0ecee101e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:41Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:41 crc kubenswrapper[4729]: E0127 14:06:41.790331 4729 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.792611 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.792642 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.792653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.792668 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.792677 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.894536 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.894580 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.894591 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.894607 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.894618 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.996851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.996896 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.996908 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.996924 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:41 crc kubenswrapper[4729]: I0127 14:06:41.996937 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:41Z","lastTransitionTime":"2026-01-27T14:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.050543 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.050599 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.050635 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.050570 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:42 crc kubenswrapper[4729]: E0127 14:06:42.050709 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:42 crc kubenswrapper[4729]: E0127 14:06:42.050812 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:42 crc kubenswrapper[4729]: E0127 14:06:42.050919 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:42 crc kubenswrapper[4729]: E0127 14:06:42.051013 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.054647 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:54:05.714901712 +0000 UTC Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.099178 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.099234 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.099251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.099270 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.099281 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.202034 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.202084 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.202098 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.202118 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.202130 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.304496 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.304535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.304548 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.304562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.304572 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.406867 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.406922 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.406939 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.406956 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.406966 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.509809 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.509842 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.509850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.509864 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.509892 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.612418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.612468 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.612479 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.612496 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.612507 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.714482 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.714538 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.714550 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.714565 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.714577 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.817326 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.817396 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.817421 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.817454 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.817478 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.921261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.921331 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.921350 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.921374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:42 crc kubenswrapper[4729]: I0127 14:06:42.921392 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:42Z","lastTransitionTime":"2026-01-27T14:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.024280 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.024334 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.024345 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.024364 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.024377 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.055777 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:47:23.559051254 +0000 UTC Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.127109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.127174 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.127185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.127201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.127457 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.229801 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.229851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.229907 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.229923 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.229932 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.333201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.333251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.333261 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.333279 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.333296 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.436114 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.436177 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.436191 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.436230 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.436243 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.538774 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.538816 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.538826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.538844 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.538856 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.640655 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.640692 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.640701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.640714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.640723 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.743903 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.743972 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.743987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.744011 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.744028 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.847146 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.847190 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.847201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.847219 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.847230 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.950082 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.950134 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.950147 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.950165 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:43 crc kubenswrapper[4729]: I0127 14:06:43.950180 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:43Z","lastTransitionTime":"2026-01-27T14:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.050306 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.050371 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.050404 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:44 crc kubenswrapper[4729]: E0127 14:06:44.050522 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.050565 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:44 crc kubenswrapper[4729]: E0127 14:06:44.050624 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:44 crc kubenswrapper[4729]: E0127 14:06:44.050714 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:44 crc kubenswrapper[4729]: E0127 14:06:44.050806 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.052444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.052517 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.052529 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.052568 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.052583 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.056055 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:29:30.729941167 +0000 UTC Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.067967 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.083712 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.097070 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-thlc7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c06c7af2-5a87-49e1-82ce-84aa16280c72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mrz4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:48Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-thlc7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc 
kubenswrapper[4729]: I0127 14:06:44.124554 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f72ae340-e519-4f07-a7a7-048ab8664106\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38725098396e04414a0c7ce52a78c6cf5607455e7651a13b28b751400e5562a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://18cea865a28325adca18d48815680ba2ceb84ab2a0bce6e3db14436c00bfbd54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3db67145b70dafa5c1e45d7ae8cf7999561044b97044abdf60dd8a2a73d423c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://836e28be6828b27ccbd14bb2c87ff9753728de49340022eb8401bdcf5a5b92b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0737e540ae60aab284e70467323c45ab3a4fa6b800a722634679f4fd1976d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ad18395cdec45eade07d38913c372844fb59cb1d9a1d9aeee5979ac1760a050\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e90d327b6e68552aaa0203ac015833ff96d4027d822afc603ef9a6e9165fe88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4855023543f9aed411f91100730a65a8a0a154ff6bc6b29b075b1f38517ee4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.139109 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12a89154-c512-4f7a-bec3-f2f415009cb0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 14:05:34.816418 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 14:05:34.816576 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 14:05:34.817925 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2365610703/tls.crt::/tmp/serving-cert-2365610703/tls.key\\\\\\\"\\\\nI0127 14:05:35.036249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 14:05:35.041096 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 14:05:35.041115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 14:05:35.041134 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0127 14:05:35.041139 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 14:05:35.047665 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0127 14:05:35.047689 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0127 14:05:35.047718 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047728 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 14:05:35.047734 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 14:05:35.047752 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 14:05:35.047756 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 14:05:35.047763 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0127 14:05:35.049410 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.151236 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18776c9e-249b-4cec-8759-4b511e928af1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://902ee007476c6fd857fcf15dd89d4cf550cdc29a31e34b718113bb447b12814a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d7cc8cbad9072d50cf2efae22826c24213f3dc393dc111f78444a3eb1865bbf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://526ce28f7a727bc300dca7a904b92922269f372680f8ee150579883706c8839a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.154955 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.155242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.155349 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.155497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.155624 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.161798 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8kktz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea7fb0a-4048-41bf-ac80-6a80a1f5fb92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90af9e217f4a409095ce321176e87a32ceb3f6c8ac49963fba5ca733885743e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6jkc6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8kktz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.175577 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-ld6q8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c96a4b30-dced-4bf8-8f46-348c1b8972b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:22Z\\\",\\\"message\\\":\\\"2026-01-27T14:05:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c\\\\n2026-01-27T14:05:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cea5fed3-2691-477c-9bac-7b8df4fba99c to /host/opt/cni/bin/\\\\n2026-01-27T14:05:37Z [verbose] multus-daemon started\\\\n2026-01-27T14:05:37Z [verbose] Readiness Indicator file check\\\\n2026-01-27T14:06:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:06:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fwgk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ld6q8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.186831 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8919c7c3-b36c-4bf1-8aed-355b818721a4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://622d00493ae9b44235b176c4f29d90ef8e891db0d49682d32dea3affed697c40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5qbl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-khqcl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.201410 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9l9wv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d187af1-26d7-49c1-b74a-cd8dab606cd7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35f692fbf8d2e599b27d520d861af3805a4c22c9f3e20abaa0d982ba4428499d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05
:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h78cw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9l9wv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.211802 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82601bab-859c-4885-9f16-d2c39bdd86e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f8c837bfa552b9af9a167d3367d72568f56cda6db7648f39453c1d49fb27059\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://022ddb968c682b2f439cd71fbb537aed65103d829e67f8e01f58a73a5285b957\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.223147 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aec3691d4960f62560372c1b91b5a2e2d91ef752d9a2b19d00c7501b1539752\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc552c3832dd05a5d1604f6d38602f8f30bdb2be394632872ea6858ae2c5ce4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.236144 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.247369 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36ed3984-2bfa-44db-8ef3-985fc2abbeb0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8b5dc4c4667dc143b03b698fab55826281245d8e32694fd3e474eced46c8da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f8f1844406a598e2883ca129a21b
092e8a4157fa4b1e8b030b74a714f53fcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pqs5n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tb4tr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.257625 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.257658 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.257670 4729 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.257685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.257698 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.259011 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2414239a-5640-4cde-8b89-bf92afbea867\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d816ac7523dd864247bdaf3bab6a2f7eb1bca5759da54de10801b60e44a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a056ab21aeb91a82b9bd6d1e23459e8efafac870a893452c52998494e2421f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b79bff5cd5753aef532e4fa8ee90b69b775711cc4b70f772b43fed7ecb795f0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4da52425702fc432bb3c953323d980948359654091e563de8bc8b8e449521f74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.271633 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08165218-cb0f-4830-a709-1ebad64bb005\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ca34c92486a81909f26ec37c47e1d7cd1e0b059457516e523c3f18a0f3e30b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://743a10b70191d523eadeec3d50f5877f451730aa1a12fcdf4090809363e987f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c375ff7b09b98cafdfef8ff1cbc536b3c1dfdfa789b277651f86f3496287b28c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0b7135f2aef8466f2ebc8462ff7f1d6b9eead69fb2573a963a1f39b23044483\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09535
236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09535236d75a622239efcd5e2fffd84b7ef806d79ff6df3f929de80bd8623d0a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818fc3d19cccb768c67a1c1906eca4e8e300898ed57bc8c47fed2f1ab35701ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:40Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ed5bfbb592e417ce15b8ec9d8da68374ecce98122bb833e7ba2def4e8461b88\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-65xsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hgr4r\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.288499 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e351d0ac-c092-4226-84d2-dbcea45c1ec0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T14:06:33Z\\\",\\\"message\\\":\\\"62 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0127 14:06:33.037053 6783 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-khqcl in node crc\\\\nI0127 14:06:33.037057 6783 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0127 14:06:33.037065 6783 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0127 14:06:33.036868 6783 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037085 6783 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0127 14:06:33.037096 6783 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nF0127 14:06:33.037117 6783 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not ad\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T14:06:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de77d7cde9504d3b32
13887112fdbffde8248289b96a55a38e5ce75cc72820cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T14:05:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v24tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T14:05:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9l5t6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.300181 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcdfad505d3a29363eb409342713959711db3d55bf498b59a9b87e49f5f2d755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.309980 4729 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T14:05:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03ba81fd05f958acba7c3dc2e862a36d0e3e7ecb33f2b74f913fc51e999c3c55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T14:05:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T14:06:44Z is after 2025-08-24T17:21:41Z" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.359978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.360059 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.360072 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.360089 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.360123 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.462490 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.462533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.462737 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.462752 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.462768 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.565367 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.565402 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.565411 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.565424 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.565432 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.667869 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.668064 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.668077 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.668106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.668121 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.771506 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.771576 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.771590 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.771608 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.771629 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.874379 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.874439 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.874448 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.874463 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.874474 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.976456 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.976497 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.976509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.976524 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:44 crc kubenswrapper[4729]: I0127 14:06:44.976535 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:44Z","lastTransitionTime":"2026-01-27T14:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.051746 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:06:45 crc kubenswrapper[4729]: E0127 14:06:45.051988 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.057126 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 04:01:14.744924581 +0000 UTC Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.079671 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.079719 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.079730 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.079745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.079755 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.182769 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.182830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.182847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.182871 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.182916 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.285445 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.285504 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.285515 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.285535 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.285544 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.388116 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.388159 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.388167 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.388181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.388190 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.490756 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.490808 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.490860 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.490917 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.490936 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.592889 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.592946 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.592955 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.592987 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.592998 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.695740 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.695799 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.695812 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.695831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.695844 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.798466 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.798513 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.798523 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.798543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.798554 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.901181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.901233 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.901246 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.901264 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:45 crc kubenswrapper[4729]: I0127 14:06:45.901276 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:45Z","lastTransitionTime":"2026-01-27T14:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.003448 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.003552 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.003609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.003636 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.003654 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.050190 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.050242 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.050264 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.050476 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:46 crc kubenswrapper[4729]: E0127 14:06:46.050635 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:46 crc kubenswrapper[4729]: E0127 14:06:46.050729 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:46 crc kubenswrapper[4729]: E0127 14:06:46.050849 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:46 crc kubenswrapper[4729]: E0127 14:06:46.050994 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.058085 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:37:39.169896062 +0000 UTC Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.106257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.106296 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.106308 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.106323 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.106334 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.208911 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.208951 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.208963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.208978 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.208990 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.311654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.311696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.311704 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.311718 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.311728 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.414423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.414464 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.414473 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.414487 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.414498 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.517251 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.517292 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.517302 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.517316 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.517327 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.619732 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.619798 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.619814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.619831 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.619842 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.722158 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.722201 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.722211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.722226 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.722239 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.824820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.824867 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.824904 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.824919 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.824928 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.927469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.927509 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.927517 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.927533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:46 crc kubenswrapper[4729]: I0127 14:06:46.927544 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:46Z","lastTransitionTime":"2026-01-27T14:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.030487 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.030533 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.030541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.030556 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.030564 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.059100 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:17:26.156335303 +0000 UTC Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.133198 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.133244 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.133257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.133272 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.133282 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.236685 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.236742 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.236759 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.236781 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.236794 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.340061 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.340184 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.340211 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.340256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.340294 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.443998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.444066 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.444103 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.444132 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.444153 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.546384 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.546427 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.546437 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.546450 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.546460 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.648921 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.648966 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.648977 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.648993 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.649004 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.751621 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.751674 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.751683 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.751698 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.751708 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.854574 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.854627 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.854638 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.854654 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.854664 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.957547 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.957593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.957602 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.957617 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:47 crc kubenswrapper[4729]: I0127 14:06:47.957629 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:47Z","lastTransitionTime":"2026-01-27T14:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.050523 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.050734 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.050986 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:48 crc kubenswrapper[4729]: E0127 14:06:48.050988 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.051041 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:48 crc kubenswrapper[4729]: E0127 14:06:48.051116 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:48 crc kubenswrapper[4729]: E0127 14:06:48.051409 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:48 crc kubenswrapper[4729]: E0127 14:06:48.051505 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.059645 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:46:15.643863238 +0000 UTC Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.060633 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.060701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.060715 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.060727 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.060740 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.164295 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.164337 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.164348 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.164368 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.164380 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.267340 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.267381 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.267391 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.267406 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.267417 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.370751 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.370807 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.370816 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.370835 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.370845 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.473782 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.474503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.474730 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.474973 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.475184 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.578446 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.578507 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.578541 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.578575 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.578592 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.681543 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.681616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.681641 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.681672 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.681698 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.783546 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.783598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.783613 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.783631 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.783644 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.886582 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.886627 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.886638 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.886653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.886663 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.988851 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.988980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.989004 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.989036 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:48 crc kubenswrapper[4729]: I0127 14:06:48.989095 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:48Z","lastTransitionTime":"2026-01-27T14:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.059800 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 15:36:26.715656192 +0000 UTC Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.092257 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.092313 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.092328 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.092348 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.092363 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.194200 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.194243 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.194268 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.194294 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.194311 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.297374 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.297415 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.297428 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.297444 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.297455 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.399495 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.399567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.399577 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.399592 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.399607 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.501423 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.501476 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.501488 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.501503 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.501513 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.604109 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.604155 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.604166 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.604181 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.604193 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.706816 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.706918 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.706938 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.706962 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.706977 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.809126 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.809175 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.809185 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.809199 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.809210 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.911943 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.911980 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.911998 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.912017 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:49 crc kubenswrapper[4729]: I0127 14:06:49.912029 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:49Z","lastTransitionTime":"2026-01-27T14:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.014418 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.014469 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.014481 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.014498 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.014512 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.050014 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.050078 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.050084 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:50 crc kubenswrapper[4729]: E0127 14:06:50.050145 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.050302 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:50 crc kubenswrapper[4729]: E0127 14:06:50.050332 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:50 crc kubenswrapper[4729]: E0127 14:06:50.050370 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:50 crc kubenswrapper[4729]: E0127 14:06:50.050450 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.059938 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:05:30.888094835 +0000 UTC Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.116494 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.116553 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.116570 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.116593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.116611 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.220073 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.220133 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.220151 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.220173 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.220192 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.324026 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.324106 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.324128 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.324157 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.324175 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.426593 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.427256 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.427283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.427301 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.427313 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.529186 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.529254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.529266 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.529283 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.529295 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.632112 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.632209 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.632227 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.632254 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.632272 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.734696 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.734745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.734761 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.734780 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.734792 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.836917 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.836963 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.836979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.837000 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.837010 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.938694 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.938824 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.938837 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.938850 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:50 crc kubenswrapper[4729]: I0127 14:06:50.938858 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:50Z","lastTransitionTime":"2026-01-27T14:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.041703 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.041760 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.041771 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.041788 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.041798 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.060444 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:54:37.024288236 +0000 UTC Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.144589 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.144665 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.144688 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.144714 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.144732 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.247616 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.247656 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.247667 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.247684 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.247696 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.350182 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.350229 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.350242 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.350260 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.350274 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.453180 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.453244 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.453265 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.453291 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.453308 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.556749 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.556804 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.556820 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.556847 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.556863 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.659562 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.659598 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.659609 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.659626 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.659637 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.762778 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.762830 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.762841 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.762856 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.762870 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.865651 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.865745 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.865762 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.865783 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.865800 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.968701 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.968772 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.968794 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.968821 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:51 crc kubenswrapper[4729]: I0127 14:06:51.968837 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:51Z","lastTransitionTime":"2026-01-27T14:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.050688 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.050764 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:52 crc kubenswrapper[4729]: E0127 14:06:52.050827 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:52 crc kubenswrapper[4729]: E0127 14:06:52.050991 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.051025 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.051102 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:52 crc kubenswrapper[4729]: E0127 14:06:52.051275 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:52 crc kubenswrapper[4729]: E0127 14:06:52.051363 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.061510 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 20:55:01.172819288 +0000 UTC Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.071768 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.071814 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.071826 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.071839 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.071850 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:52Z","lastTransitionTime":"2026-01-27T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.119567 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.119632 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.119653 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.119676 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.119695 4729 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T14:06:52Z","lastTransitionTime":"2026-01-27T14:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.184425 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz"] Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.184845 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.187414 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.187566 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.188233 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.188303 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.206388 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.206366364 podStartE2EDuration="44.206366364s" podCreationTimestamp="2026-01-27 14:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.206361404 +0000 UTC m=+98.790552468" watchObservedRunningTime="2026-01-27 14:06:52.206366364 +0000 UTC m=+98.790557388" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.229854 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hgr4r" podStartSLOduration=78.229825384 podStartE2EDuration="1m18.229825384s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.225193583 +0000 UTC m=+98.809384597" watchObservedRunningTime="2026-01-27 14:06:52.229825384 +0000 UTC 
m=+98.814016428" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.259256 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c23032f-7897-47f9-9a8e-22427e11ebf4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.259333 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c23032f-7897-47f9-9a8e-22427e11ebf4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.259569 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c23032f-7897-47f9-9a8e-22427e11ebf4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.259655 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c23032f-7897-47f9-9a8e-22427e11ebf4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.259698 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c23032f-7897-47f9-9a8e-22427e11ebf4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.290116 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tb4tr" podStartSLOduration=77.290083444 podStartE2EDuration="1m17.290083444s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.270181269 +0000 UTC m=+98.854372313" watchObservedRunningTime="2026-01-27 14:06:52.290083444 +0000 UTC m=+98.874274498" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.324776 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=76.324759756 podStartE2EDuration="1m16.324759756s" podCreationTimestamp="2026-01-27 14:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.323903549 +0000 UTC m=+98.908094563" watchObservedRunningTime="2026-01-27 14:06:52.324759756 +0000 UTC m=+98.908950760" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.342855 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.342841511 podStartE2EDuration="1m17.342841511s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.342692926 +0000 UTC m=+98.926883950" watchObservedRunningTime="2026-01-27 14:06:52.342841511 +0000 UTC 
m=+98.927032515" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.355912 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.355895184 podStartE2EDuration="1m13.355895184s" podCreationTimestamp="2026-01-27 14:05:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.354611503 +0000 UTC m=+98.938802527" watchObservedRunningTime="2026-01-27 14:06:52.355895184 +0000 UTC m=+98.940086198" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360248 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c23032f-7897-47f9-9a8e-22427e11ebf4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360305 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c23032f-7897-47f9-9a8e-22427e11ebf4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360340 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c23032f-7897-47f9-9a8e-22427e11ebf4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360412 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c23032f-7897-47f9-9a8e-22427e11ebf4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360442 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c23032f-7897-47f9-9a8e-22427e11ebf4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6c23032f-7897-47f9-9a8e-22427e11ebf4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.360531 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6c23032f-7897-47f9-9a8e-22427e11ebf4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.361132 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6c23032f-7897-47f9-9a8e-22427e11ebf4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.367220 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6c23032f-7897-47f9-9a8e-22427e11ebf4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.379848 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c23032f-7897-47f9-9a8e-22427e11ebf4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z2vdz\" (UID: \"6c23032f-7897-47f9-9a8e-22427e11ebf4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.434546 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.434529589 podStartE2EDuration="31.434529589s" podCreationTimestamp="2026-01-27 14:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.421968583 +0000 UTC m=+99.006159607" watchObservedRunningTime="2026-01-27 14:06:52.434529589 +0000 UTC m=+99.018720613" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.454013 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8kktz" podStartSLOduration=79.45399775 podStartE2EDuration="1m19.45399775s" podCreationTimestamp="2026-01-27 14:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.453913007 +0000 UTC m=+99.038104031" 
watchObservedRunningTime="2026-01-27 14:06:52.45399775 +0000 UTC m=+99.038188844" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.464659 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ld6q8" podStartSLOduration=78.464641514 podStartE2EDuration="1m18.464641514s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.464415847 +0000 UTC m=+99.048606861" watchObservedRunningTime="2026-01-27 14:06:52.464641514 +0000 UTC m=+99.048832528" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.477713 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podStartSLOduration=78.477696647 podStartE2EDuration="1m18.477696647s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.47718271 +0000 UTC m=+99.061373714" watchObservedRunningTime="2026-01-27 14:06:52.477696647 +0000 UTC m=+99.061887651" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.488827 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9l9wv" podStartSLOduration=78.488813697 podStartE2EDuration="1m18.488813697s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.488266659 +0000 UTC m=+99.072457653" watchObservedRunningTime="2026-01-27 14:06:52.488813697 +0000 UTC m=+99.073004701" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.501440 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.599257 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" event={"ID":"6c23032f-7897-47f9-9a8e-22427e11ebf4","Type":"ContainerStarted","Data":"e76c19e742349578218dbe24005367dd17eb43de9c4547d23c0fd022fad2db95"} Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.599303 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" event={"ID":"6c23032f-7897-47f9-9a8e-22427e11ebf4","Type":"ContainerStarted","Data":"464ee22a893d12a53abe83938ef97194bc0a7330a94261995fede98c25b2cab2"} Jan 27 14:06:52 crc kubenswrapper[4729]: I0127 14:06:52.662379 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:52 crc kubenswrapper[4729]: E0127 14:06:52.662478 4729 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:06:52 crc kubenswrapper[4729]: E0127 14:06:52.662571 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs podName:c06c7af2-5a87-49e1-82ce-84aa16280c72 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:56.66254857 +0000 UTC m=+163.246739574 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs") pod "network-metrics-daemon-thlc7" (UID: "c06c7af2-5a87-49e1-82ce-84aa16280c72") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 14:06:53 crc kubenswrapper[4729]: I0127 14:06:53.061686 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:32:21.77744527 +0000 UTC Jan 27 14:06:53 crc kubenswrapper[4729]: I0127 14:06:53.062092 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 14:06:53 crc kubenswrapper[4729]: I0127 14:06:53.069347 4729 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 14:06:54 crc kubenswrapper[4729]: I0127 14:06:54.050430 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:54 crc kubenswrapper[4729]: I0127 14:06:54.050575 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:54 crc kubenswrapper[4729]: E0127 14:06:54.052042 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:54 crc kubenswrapper[4729]: I0127 14:06:54.052107 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:54 crc kubenswrapper[4729]: I0127 14:06:54.052209 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:54 crc kubenswrapper[4729]: E0127 14:06:54.052276 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:54 crc kubenswrapper[4729]: E0127 14:06:54.052396 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:54 crc kubenswrapper[4729]: E0127 14:06:54.052506 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:56 crc kubenswrapper[4729]: I0127 14:06:56.050696 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:56 crc kubenswrapper[4729]: I0127 14:06:56.050777 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:56 crc kubenswrapper[4729]: I0127 14:06:56.050782 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:56 crc kubenswrapper[4729]: E0127 14:06:56.050851 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:56 crc kubenswrapper[4729]: E0127 14:06:56.050983 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:56 crc kubenswrapper[4729]: I0127 14:06:56.051070 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:56 crc kubenswrapper[4729]: E0127 14:06:56.051103 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:56 crc kubenswrapper[4729]: E0127 14:06:56.051303 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:58 crc kubenswrapper[4729]: I0127 14:06:58.050242 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:06:58 crc kubenswrapper[4729]: I0127 14:06:58.050342 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:06:58 crc kubenswrapper[4729]: I0127 14:06:58.050370 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:06:58 crc kubenswrapper[4729]: E0127 14:06:58.050428 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:06:58 crc kubenswrapper[4729]: I0127 14:06:58.050560 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:06:58 crc kubenswrapper[4729]: E0127 14:06:58.050576 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:06:58 crc kubenswrapper[4729]: E0127 14:06:58.050742 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:06:58 crc kubenswrapper[4729]: E0127 14:06:58.050837 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:06:59 crc kubenswrapper[4729]: I0127 14:06:59.051753 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:06:59 crc kubenswrapper[4729]: E0127 14:06:59.052104 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:07:00 crc kubenswrapper[4729]: I0127 14:07:00.050792 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:00 crc kubenswrapper[4729]: I0127 14:07:00.050793 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:00 crc kubenswrapper[4729]: I0127 14:07:00.051414 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:00 crc kubenswrapper[4729]: E0127 14:07:00.051947 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:00 crc kubenswrapper[4729]: E0127 14:07:00.052123 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:00 crc kubenswrapper[4729]: I0127 14:07:00.052292 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:00 crc kubenswrapper[4729]: E0127 14:07:00.052305 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:00 crc kubenswrapper[4729]: E0127 14:07:00.052734 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:02 crc kubenswrapper[4729]: I0127 14:07:02.050546 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:02 crc kubenswrapper[4729]: I0127 14:07:02.050632 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:02 crc kubenswrapper[4729]: I0127 14:07:02.050681 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:02 crc kubenswrapper[4729]: I0127 14:07:02.050557 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:02 crc kubenswrapper[4729]: E0127 14:07:02.050752 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:02 crc kubenswrapper[4729]: E0127 14:07:02.050940 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:02 crc kubenswrapper[4729]: E0127 14:07:02.051033 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:02 crc kubenswrapper[4729]: E0127 14:07:02.051118 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:04 crc kubenswrapper[4729]: I0127 14:07:04.050747 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:04 crc kubenswrapper[4729]: I0127 14:07:04.050799 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:04 crc kubenswrapper[4729]: I0127 14:07:04.050832 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:04 crc kubenswrapper[4729]: I0127 14:07:04.052255 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:04 crc kubenswrapper[4729]: E0127 14:07:04.052212 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:04 crc kubenswrapper[4729]: E0127 14:07:04.052391 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:04 crc kubenswrapper[4729]: E0127 14:07:04.052317 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:04 crc kubenswrapper[4729]: E0127 14:07:04.052646 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:06 crc kubenswrapper[4729]: I0127 14:07:06.049891 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:06 crc kubenswrapper[4729]: I0127 14:07:06.050035 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:06 crc kubenswrapper[4729]: E0127 14:07:06.050127 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:06 crc kubenswrapper[4729]: I0127 14:07:06.050353 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:06 crc kubenswrapper[4729]: I0127 14:07:06.050378 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:06 crc kubenswrapper[4729]: E0127 14:07:06.050437 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:06 crc kubenswrapper[4729]: E0127 14:07:06.050781 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:06 crc kubenswrapper[4729]: E0127 14:07:06.051013 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:08 crc kubenswrapper[4729]: I0127 14:07:08.049952 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:08 crc kubenswrapper[4729]: I0127 14:07:08.050020 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:08 crc kubenswrapper[4729]: I0127 14:07:08.050569 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:08 crc kubenswrapper[4729]: I0127 14:07:08.050743 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:08 crc kubenswrapper[4729]: E0127 14:07:08.050853 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:08 crc kubenswrapper[4729]: E0127 14:07:08.050957 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:08 crc kubenswrapper[4729]: E0127 14:07:08.051326 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:08 crc kubenswrapper[4729]: E0127 14:07:08.051380 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.657410 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/1.log" Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.658028 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/0.log" Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.658109 4729 generic.go:334] "Generic (PLEG): container finished" podID="c96a4b30-dced-4bf8-8f46-348c1b8972b3" containerID="3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be" exitCode=1 Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.658147 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerDied","Data":"3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be"} Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.658185 4729 scope.go:117] "RemoveContainer" containerID="76a143ac096c66deebcddf2ac64d13109e721c186c6ca6ff472df80b6bc5d711" Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.659340 4729 scope.go:117] "RemoveContainer" containerID="3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be" Jan 27 14:07:09 crc kubenswrapper[4729]: E0127 14:07:09.659563 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-ld6q8_openshift-multus(c96a4b30-dced-4bf8-8f46-348c1b8972b3)\"" pod="openshift-multus/multus-ld6q8" podUID="c96a4b30-dced-4bf8-8f46-348c1b8972b3" Jan 27 14:07:09 crc kubenswrapper[4729]: I0127 14:07:09.677239 4729 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2vdz" podStartSLOduration=95.677224238 podStartE2EDuration="1m35.677224238s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:06:52.611766706 +0000 UTC m=+99.195957730" watchObservedRunningTime="2026-01-27 14:07:09.677224238 +0000 UTC m=+116.261415242" Jan 27 14:07:10 crc kubenswrapper[4729]: I0127 14:07:10.049926 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:10 crc kubenswrapper[4729]: E0127 14:07:10.050038 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:10 crc kubenswrapper[4729]: I0127 14:07:10.050330 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:10 crc kubenswrapper[4729]: I0127 14:07:10.050389 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:10 crc kubenswrapper[4729]: I0127 14:07:10.050390 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:10 crc kubenswrapper[4729]: E0127 14:07:10.050461 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:10 crc kubenswrapper[4729]: E0127 14:07:10.050575 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:10 crc kubenswrapper[4729]: E0127 14:07:10.050673 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:10 crc kubenswrapper[4729]: I0127 14:07:10.050787 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:07:10 crc kubenswrapper[4729]: E0127 14:07:10.051007 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-9l5t6_openshift-ovn-kubernetes(e351d0ac-c092-4226-84d2-dbcea45c1ec0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" Jan 27 14:07:10 crc kubenswrapper[4729]: I0127 14:07:10.663429 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/1.log" Jan 27 14:07:12 crc kubenswrapper[4729]: I0127 14:07:12.050241 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:12 crc kubenswrapper[4729]: I0127 14:07:12.050289 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:12 crc kubenswrapper[4729]: E0127 14:07:12.050439 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:12 crc kubenswrapper[4729]: I0127 14:07:12.050543 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:12 crc kubenswrapper[4729]: E0127 14:07:12.051348 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:12 crc kubenswrapper[4729]: I0127 14:07:12.051652 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:12 crc kubenswrapper[4729]: E0127 14:07:12.051864 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:12 crc kubenswrapper[4729]: E0127 14:07:12.052353 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:14 crc kubenswrapper[4729]: I0127 14:07:14.049766 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:14 crc kubenswrapper[4729]: I0127 14:07:14.049811 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:14 crc kubenswrapper[4729]: I0127 14:07:14.049858 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:14 crc kubenswrapper[4729]: E0127 14:07:14.050901 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:14 crc kubenswrapper[4729]: I0127 14:07:14.050939 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:14 crc kubenswrapper[4729]: E0127 14:07:14.051049 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:14 crc kubenswrapper[4729]: E0127 14:07:14.051119 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:14 crc kubenswrapper[4729]: E0127 14:07:14.051218 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:14 crc kubenswrapper[4729]: E0127 14:07:14.072658 4729 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 14:07:14 crc kubenswrapper[4729]: E0127 14:07:14.195364 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:07:16 crc kubenswrapper[4729]: I0127 14:07:16.050868 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:16 crc kubenswrapper[4729]: I0127 14:07:16.051010 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:16 crc kubenswrapper[4729]: E0127 14:07:16.051156 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:16 crc kubenswrapper[4729]: I0127 14:07:16.051218 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:16 crc kubenswrapper[4729]: E0127 14:07:16.051322 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:16 crc kubenswrapper[4729]: E0127 14:07:16.051366 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:16 crc kubenswrapper[4729]: I0127 14:07:16.051586 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:16 crc kubenswrapper[4729]: E0127 14:07:16.051642 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:18 crc kubenswrapper[4729]: I0127 14:07:18.049774 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:18 crc kubenswrapper[4729]: I0127 14:07:18.049839 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:18 crc kubenswrapper[4729]: I0127 14:07:18.049855 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:18 crc kubenswrapper[4729]: I0127 14:07:18.049802 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:18 crc kubenswrapper[4729]: E0127 14:07:18.049950 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:18 crc kubenswrapper[4729]: E0127 14:07:18.050050 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:18 crc kubenswrapper[4729]: E0127 14:07:18.050141 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:18 crc kubenswrapper[4729]: E0127 14:07:18.050378 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:19 crc kubenswrapper[4729]: E0127 14:07:19.197170 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:07:20 crc kubenswrapper[4729]: I0127 14:07:20.050327 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:20 crc kubenswrapper[4729]: I0127 14:07:20.050391 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:20 crc kubenswrapper[4729]: I0127 14:07:20.050439 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:20 crc kubenswrapper[4729]: I0127 14:07:20.050497 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:20 crc kubenswrapper[4729]: E0127 14:07:20.050710 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:20 crc kubenswrapper[4729]: E0127 14:07:20.051401 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:20 crc kubenswrapper[4729]: E0127 14:07:20.051584 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:20 crc kubenswrapper[4729]: E0127 14:07:20.051671 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.051431 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.696896 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/3.log" Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.699310 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerStarted","Data":"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8"} Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.699786 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.726506 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podStartSLOduration=106.726490288 podStartE2EDuration="1m46.726490288s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:21.724948028 +0000 UTC m=+128.309139052" 
watchObservedRunningTime="2026-01-27 14:07:21.726490288 +0000 UTC m=+128.310681292" Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.911746 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-thlc7"] Jan 27 14:07:21 crc kubenswrapper[4729]: I0127 14:07:21.911943 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:21 crc kubenswrapper[4729]: E0127 14:07:21.912107 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:22 crc kubenswrapper[4729]: I0127 14:07:22.050245 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:22 crc kubenswrapper[4729]: I0127 14:07:22.050272 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:22 crc kubenswrapper[4729]: E0127 14:07:22.050932 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:22 crc kubenswrapper[4729]: I0127 14:07:22.050296 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:22 crc kubenswrapper[4729]: E0127 14:07:22.051033 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:22 crc kubenswrapper[4729]: E0127 14:07:22.051144 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:23 crc kubenswrapper[4729]: I0127 14:07:23.050335 4729 scope.go:117] "RemoveContainer" containerID="3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be" Jan 27 14:07:23 crc kubenswrapper[4729]: I0127 14:07:23.707308 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/1.log" Jan 27 14:07:23 crc kubenswrapper[4729]: I0127 14:07:23.707367 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerStarted","Data":"bdd8f0c91b4e8a3fe01d49899ee5fc45f9d4d8ff5debdfe48bad7e730d5ca470"} Jan 27 14:07:24 crc kubenswrapper[4729]: I0127 14:07:24.050967 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:24 crc kubenswrapper[4729]: I0127 14:07:24.050987 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:24 crc kubenswrapper[4729]: I0127 14:07:24.051033 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:24 crc kubenswrapper[4729]: I0127 14:07:24.050958 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:24 crc kubenswrapper[4729]: E0127 14:07:24.052675 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:24 crc kubenswrapper[4729]: E0127 14:07:24.052801 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:24 crc kubenswrapper[4729]: E0127 14:07:24.052921 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:24 crc kubenswrapper[4729]: E0127 14:07:24.052844 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:24 crc kubenswrapper[4729]: E0127 14:07:24.197625 4729 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:07:26 crc kubenswrapper[4729]: I0127 14:07:26.050189 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:26 crc kubenswrapper[4729]: I0127 14:07:26.050313 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:26 crc kubenswrapper[4729]: E0127 14:07:26.050457 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:26 crc kubenswrapper[4729]: I0127 14:07:26.050475 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:26 crc kubenswrapper[4729]: I0127 14:07:26.050500 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:26 crc kubenswrapper[4729]: E0127 14:07:26.050588 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:26 crc kubenswrapper[4729]: E0127 14:07:26.050639 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:26 crc kubenswrapper[4729]: E0127 14:07:26.050740 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:28 crc kubenswrapper[4729]: I0127 14:07:28.050230 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:28 crc kubenswrapper[4729]: I0127 14:07:28.050244 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:28 crc kubenswrapper[4729]: I0127 14:07:28.050268 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:28 crc kubenswrapper[4729]: E0127 14:07:28.050736 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 14:07:28 crc kubenswrapper[4729]: E0127 14:07:28.050615 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-thlc7" podUID="c06c7af2-5a87-49e1-82ce-84aa16280c72" Jan 27 14:07:28 crc kubenswrapper[4729]: E0127 14:07:28.050787 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 14:07:28 crc kubenswrapper[4729]: I0127 14:07:28.050337 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:28 crc kubenswrapper[4729]: E0127 14:07:28.050854 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.050403 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.050460 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.050493 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.050713 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.052470 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.053149 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.053357 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.053357 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.053406 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 14:07:30 crc kubenswrapper[4729]: I0127 14:07:30.053667 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.843979 4729 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.881931 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zgq57"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.882483 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.884761 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t2v8z"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.885436 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.885988 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.886136 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6rfvl"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.886794 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.886830 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.888954 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kdqlt"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.889276 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.889637 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.890138 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891455 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-client-ca\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891519 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3f6beb-ef04-47ba-8738-849691b10351-serving-cert\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891542 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmnh\" (UniqueName: \"kubernetes.io/projected/46d3221c-be55-4ab8-95f1-f55bc1eb6596-kube-api-access-dlmnh\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891560 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/46d3221c-be55-4ab8-95f1-f55bc1eb6596-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891583 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46d3221c-be55-4ab8-95f1-f55bc1eb6596-images\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891617 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjw5d\" (UniqueName: \"kubernetes.io/projected/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-kube-api-access-kjw5d\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891636 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-config\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891652 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxz8z\" (UniqueName: \"kubernetes.io/projected/5e3f6beb-ef04-47ba-8738-849691b10351-kube-api-access-qxz8z\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891667 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891698 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891714 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.891731 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d3221c-be55-4ab8-95f1-f55bc1eb6596-config\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.898075 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.898323 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.898711 4729 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.898947 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.899056 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.899263 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.899862 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.900107 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.900291 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.900547 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cg69z"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.901224 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.905186 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.905317 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907377 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907475 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907533 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907776 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907802 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: W0127 14:07:32.907832 4729 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: configmaps "console-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 14:07:32 crc kubenswrapper[4729]: E0127 14:07:32.907861 4729 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"console-config\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907912 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907937 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.907777 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.908096 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.908353 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.908712 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.909191 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.909360 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78zrr"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.909805 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.911388 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.911606 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.911870 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 14:07:32 crc kubenswrapper[4729]: W0127 14:07:32.911988 4729 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: secrets "console-dockercfg-f62pw" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-console": no relationship found between node 'crc' and this object Jan 27 14:07:32 crc kubenswrapper[4729]: E0127 14:07:32.912008 4729 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"console-dockercfg-f62pw\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-console\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.912064 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.912146 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.912402 
4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.913187 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.913470 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.913501 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.913509 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.913579 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.921431 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.932794 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.933382 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.933485 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.933655 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.933778 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.934380 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.934536 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.934586 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.935084 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.935299 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 14:07:32 crc 
kubenswrapper[4729]: I0127 14:07:32.935544 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.935776 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.936190 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.936318 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.936417 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.936898 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.937117 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.938937 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.939560 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.940526 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.944320 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.945569 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-frjsp"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.946472 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.947717 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.947825 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.947977 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.948008 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.947847 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.948160 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 
14:07:32.948212 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.948264 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.950011 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.951322 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.951652 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.951816 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.951997 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.955089 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.956177 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.956420 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.956589 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.956764 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.956923 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.957131 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.957265 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.957586 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.957628 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.958467 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.959057 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.959094 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.959475 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.961231 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.961502 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.961913 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-89l7c"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.962310 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.962435 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.962638 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wltzd"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.963019 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.963506 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9vsf4"] Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.963729 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964015 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964112 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964160 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964072 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964298 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964533 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964564 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 14:07:32 crc kubenswrapper[4729]: I0127 14:07:32.964906 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.007423 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.007931 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.008063 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.008066 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.008430 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.008475 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.008813 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.009088 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.009284 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.009473 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.009996 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.011675 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxtl\" (UniqueName: \"kubernetes.io/projected/5291212b-828e-4312-aa3a-0187772f076f-kube-api-access-dfxtl\") pod \"downloads-7954f5f757-6rfvl\" (UID: \"5291212b-828e-4312-aa3a-0187772f076f\") " pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.011826 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3f6beb-ef04-47ba-8738-849691b10351-serving-cert\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.011961 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmnh\" (UniqueName: \"kubernetes.io/projected/46d3221c-be55-4ab8-95f1-f55bc1eb6596-kube-api-access-dlmnh\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012053 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/46d3221c-be55-4ab8-95f1-f55bc1eb6596-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012129 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46d3221c-be55-4ab8-95f1-f55bc1eb6596-images\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012155 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012204 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjw5d\" (UniqueName: \"kubernetes.io/projected/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-kube-api-access-kjw5d\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012246 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-config\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012297 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxz8z\" (UniqueName: 
\"kubernetes.io/projected/5e3f6beb-ef04-47ba-8738-849691b10351-kube-api-access-qxz8z\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.012385 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.013421 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.042629 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.042970 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d3221c-be55-4ab8-95f1-f55bc1eb6596-config\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.043094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-client-ca\") pod 
\"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.044053 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-client-ca\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.044449 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.044700 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.045600 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.046461 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46d3221c-be55-4ab8-95f1-f55bc1eb6596-config\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.046689 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46d3221c-be55-4ab8-95f1-f55bc1eb6596-images\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.047083 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.047179 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.047420 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-config\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.048013 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.048601 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.049396 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.049639 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.049802 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.050210 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.050276 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.051023 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-f58jq"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.051692 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.051962 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3f6beb-ef04-47ba-8738-849691b10351-serving-cert\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.052144 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck87f"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.052318 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.052710 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.052960 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-br7d9"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.053656 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.053862 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.054493 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.054506 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.055007 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.055158 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.055944 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.056727 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.056963 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.057093 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.057315 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.057816 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.058249 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.059009 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.059086 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.059675 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.059991 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.060426 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.060654 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.062427 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.063473 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.066698 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.067484 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-698tz"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.067993 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.068340 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.068605 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.069000 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/46d3221c-be55-4ab8-95f1-f55bc1eb6596-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.069550 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.069736 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8548"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.070228 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.071499 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.071563 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-92w7g"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.071767 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.072158 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.072187 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.072234 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.094574 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.096702 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kdqlt"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.099296 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.099675 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t2v8z"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.101873 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.104912 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6rfvl"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.105075 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.105870 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.109325 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zgq57"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.113480 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.114729 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wltzd"] 
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.115824 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.118003 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8sd69"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.118658 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.120615 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-knf65"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.122130 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.122288 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.123299 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cg69z"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.125604 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.126659 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.127570 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.129023 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.130217 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.131344 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-frjsp"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.132273 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck87f"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.133865 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-br7d9"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.136036 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 
14:07:33.137969 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.140216 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.142628 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.143991 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78zrr"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.144597 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.145071 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.146370 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmcc\" (UniqueName: \"kubernetes.io/projected/8e60df4d-540b-489f-a297-46f35014add0-kube-api-access-jbmcc\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.146402 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcdz\" (UniqueName: \"kubernetes.io/projected/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-kube-api-access-fgcdz\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.146674 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9vsf4"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147233 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-config\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147287 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/662ca662-4c9a-4546-975f-5ac76a11927f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147324 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-auth-proxy-config\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147346 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbsz\" (UniqueName: \"kubernetes.io/projected/662ca662-4c9a-4546-975f-5ac76a11927f-kube-api-access-5dbsz\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") 
" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147370 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-etcd-serving-ca\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147434 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147453 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147619 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/07eaa085-f9f9-4d4a-9763-c3a216278f2b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147644 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-trusted-ca-bundle\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147669 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-serving-cert\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147716 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147736 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147755 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147775 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-oauth-serving-cert\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147790 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f060881e-4060-4501-bcdc-b8f470d8f53e-audit-dir\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147833 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-config\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147790 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8sd69"] Jan 27 
14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147926 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-encryption-config\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.147982 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148079 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148161 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148207 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-serving-cert\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148231 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-oauth-config\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148255 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-audit-policies\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148276 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-audit-policies\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148299 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-serving-cert\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148323 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148345 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662ca662-4c9a-4546-975f-5ac76a11927f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148442 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4dfbdbb-85c3-4c93-a321-d67e5438b772-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148474 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/650212ba-40b6-4d9e-803d-3556b77bd87e-serving-cert\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148733 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-machine-approver-tls\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148795 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.148805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149010 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4dfbdbb-85c3-4c93-a321-d67e5438b772-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149046 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkr5g\" (UniqueName: \"kubernetes.io/projected/07eaa085-f9f9-4d4a-9763-c3a216278f2b-kube-api-access-jkr5g\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149081 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-g5554\" (UniqueName: \"kubernetes.io/projected/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-kube-api-access-g5554\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149101 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24nlj\" (UniqueName: \"kubernetes.io/projected/f060881e-4060-4501-bcdc-b8f470d8f53e-kube-api-access-24nlj\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149151 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-etcd-client\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149170 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlhw\" (UniqueName: \"kubernetes.io/projected/cb22ecac-005d-414f-928c-5714be9f7596-kube-api-access-twlhw\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149217 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-service-ca-bundle\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149235 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-config\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149279 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-service-ca\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149297 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-serving-cert\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149314 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-config\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150532 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150601 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b954a21-e466-4ad6-aac5-f3bcea883d63-node-pullsecrets\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.149760 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-knf65"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150623 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b954a21-e466-4ad6-aac5-f3bcea883d63-audit-dir\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150683 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-trusted-ca\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150741 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config\") pod \"console-f9d7485db-cg69z\" (UID: 
\"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150767 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb22ecac-005d-414f-928c-5714be9f7596-audit-dir\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.150907 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-92w7g"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151525 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-config\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151762 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/c4dfbdbb-85c3-4c93-a321-d67e5438b772-kube-api-access-2dj9b\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151796 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-serving-cert\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 
14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151823 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-client\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151860 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvrk\" (UniqueName: \"kubernetes.io/projected/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-kube-api-access-6rvrk\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151901 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c94d47c-c97c-493b-bdd6-cba399e6d9bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9vsf4\" (UID: \"4c94d47c-c97c-493b-bdd6-cba399e6d9bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151923 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-client-ca\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151944 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfgz8\" (UniqueName: 
\"kubernetes.io/projected/4c94d47c-c97c-493b-bdd6-cba399e6d9bc-kube-api-access-kfgz8\") pod \"multus-admission-controller-857f4d67dd-9vsf4\" (UID: \"4c94d47c-c97c-493b-bdd6-cba399e6d9bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151965 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.151987 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-encryption-config\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152018 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmvzn\" (UniqueName: \"kubernetes.io/projected/a385db05-78b0-4bb7-80b9-e0089b92e40c-kube-api-access-rmvzn\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152041 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-image-import-ca\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " 
pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152059 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152082 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfxtl\" (UniqueName: \"kubernetes.io/projected/5291212b-828e-4312-aa3a-0187772f076f-kube-api-access-dfxtl\") pod \"downloads-7954f5f757-6rfvl\" (UID: \"5291212b-828e-4312-aa3a-0187772f076f\") " pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152494 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a385db05-78b0-4bb7-80b9-e0089b92e40c-serving-cert\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152522 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152542 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-service-ca\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc 
kubenswrapper[4729]: I0127 14:07:33.152562 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcxdv\" (UniqueName: \"kubernetes.io/projected/1b954a21-e466-4ad6-aac5-f3bcea883d63-kube-api-access-pcxdv\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152584 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppmn\" (UniqueName: \"kubernetes.io/projected/650212ba-40b6-4d9e-803d-3556b77bd87e-kube-api-access-bppmn\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152600 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-etcd-client\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152689 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07eaa085-f9f9-4d4a-9763-c3a216278f2b-serving-cert\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152719 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/c4dfbdbb-85c3-4c93-a321-d67e5438b772-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152735 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152772 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152789 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-audit\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152824 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-trusted-ca-bundle\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152853 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-ca\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.152870 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-config\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.153445 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fdbtl"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.154264 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.154748 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8548"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.156261 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-89l7c"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.157582 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5q2rv"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.158708 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.158920 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.159863 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.160887 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.161962 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-698tz"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.163098 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.164367 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj"] Jan 27 14:07:33 crc 
kubenswrapper[4729]: I0127 14:07:33.164732 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.165513 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.166604 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5q2rv"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.185539 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.205520 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.225054 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.244764 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253549 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-config\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253583 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-ca\") pod 
\"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253610 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-config\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253635 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-default-certificate\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253656 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f92069a8-3117-4693-ace6-f37ac416bf76-images\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253674 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253688 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253703 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-etcd-serving-ca\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253790 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-trusted-ca-bundle\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253848 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-serving-cert\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.253934 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-metrics-tls\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255107 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe31444-42f1-415e-86f8-ce9b8258fe65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255498 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-ca\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255654 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255764 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255836 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: 
\"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255863 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-config\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255901 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.255935 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe31444-42f1-415e-86f8-ce9b8258fe65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256039 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/849503fd-ce3e-42e9-bae7-596c510d2b8b-tmpfs\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256081 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256108 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/830dd0be-675e-405e-b29c-b2916985aac4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256134 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b146e05-ee69-4211-b20d-5c5342a66a98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256249 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-serving-cert\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256280 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-oauth-config\") pod \"console-f9d7485db-cg69z\" (UID: 
\"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256303 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662ca662-4c9a-4546-975f-5ac76a11927f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256325 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4dfbdbb-85c3-4c93-a321-d67e5438b772-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256342 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/650212ba-40b6-4d9e-803d-3556b77bd87e-serving-cert\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256366 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-machine-approver-tls\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256389 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-metrics-certs\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256410 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256422 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256436 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4dfbdbb-85c3-4c93-a321-d67e5438b772-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256457 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkr5g\" (UniqueName: \"kubernetes.io/projected/07eaa085-f9f9-4d4a-9763-c3a216278f2b-kube-api-access-jkr5g\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256482 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24nlj\" (UniqueName: \"kubernetes.io/projected/f060881e-4060-4501-bcdc-b8f470d8f53e-kube-api-access-24nlj\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256506 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-etcd-client\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256530 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrzr\" (UniqueName: \"kubernetes.io/projected/e59601bb-7561-4555-99e9-0e6faf392716-kube-api-access-nkrzr\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256547 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-config\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256571 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-serving-cert\") pod 
\"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256600 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-stats-auth\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256621 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e59601bb-7561-4555-99e9-0e6faf392716-secret-volume\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256676 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6ec7e01-d640-484f-8b1e-37accfa6c3d2-metrics-tls\") pod \"dns-operator-744455d44c-br7d9\" (UID: \"d6ec7e01-d640-484f-8b1e-37accfa6c3d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256810 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b954a21-e466-4ad6-aac5-f3bcea883d63-node-pullsecrets\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256833 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b954a21-e466-4ad6-aac5-f3bcea883d63-audit-dir\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-trusted-ca\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256898 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmdqk\" (UniqueName: \"kubernetes.io/projected/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-kube-api-access-rmdqk\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb22ecac-005d-414f-928c-5714be9f7596-audit-dir\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256947 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-config-volume\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.256978 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c94d47c-c97c-493b-bdd6-cba399e6d9bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9vsf4\" (UID: \"4c94d47c-c97c-493b-bdd6-cba399e6d9bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.257006 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-serving-cert\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.257024 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-client\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.258891 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/662ca662-4c9a-4546-975f-5ac76a11927f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.259454 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-config\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.259677 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-config\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260221 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-client-ca\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260264 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260290 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-image-import-ca\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: 
I0127 14:07:33.260310 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f92069a8-3117-4693-ace6-f37ac416bf76-proxy-tls\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260336 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzb55\" (UniqueName: \"kubernetes.io/projected/849503fd-ce3e-42e9-bae7-596c510d2b8b-kube-api-access-zzb55\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260361 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54s5\" (UniqueName: \"kubernetes.io/projected/6ce0b622-7220-4c64-ba53-83fe3255d20c-kube-api-access-s54s5\") pod \"control-plane-machine-set-operator-78cbb6b69f-rjlbl\" (UID: \"6ce0b622-7220-4c64-ba53-83fe3255d20c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260395 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ce0b622-7220-4c64-ba53-83fe3255d20c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rjlbl\" (UID: \"6ce0b622-7220-4c64-ba53-83fe3255d20c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260434 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pcxdv\" (UniqueName: \"kubernetes.io/projected/1b954a21-e466-4ad6-aac5-f3bcea883d63-kube-api-access-pcxdv\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260465 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260497 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bppmn\" (UniqueName: \"kubernetes.io/projected/650212ba-40b6-4d9e-803d-3556b77bd87e-kube-api-access-bppmn\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260537 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260581 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: 
\"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260606 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-audit\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260643 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m98sb\" (UniqueName: \"kubernetes.io/projected/d6ec7e01-d640-484f-8b1e-37accfa6c3d2-kube-api-access-m98sb\") pod \"dns-operator-744455d44c-br7d9\" (UID: \"d6ec7e01-d640-484f-8b1e-37accfa6c3d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260675 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-trusted-ca-bundle\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260697 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260731 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbpx\" (UniqueName: 
\"kubernetes.io/projected/a48f8395-2811-46ab-82ea-0c5afc8c8eee-kube-api-access-5dbpx\") pod \"package-server-manager-789f6589d5-m8ksr\" (UID: \"a48f8395-2811-46ab-82ea-0c5afc8c8eee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260771 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbmcc\" (UniqueName: \"kubernetes.io/projected/8e60df4d-540b-489f-a297-46f35014add0-kube-api-access-jbmcc\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260803 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcdz\" (UniqueName: \"kubernetes.io/projected/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-kube-api-access-fgcdz\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260836 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/662ca662-4c9a-4546-975f-5ac76a11927f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260897 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-auth-proxy-config\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:33 crc 
kubenswrapper[4729]: I0127 14:07:33.260927 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a48f8395-2811-46ab-82ea-0c5afc8c8eee-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8ksr\" (UID: \"a48f8395-2811-46ab-82ea-0c5afc8c8eee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260954 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzfd\" (UniqueName: \"kubernetes.io/projected/ccdf606d-6e18-4dad-b98d-5c4086aa953d-kube-api-access-fqzfd\") pod \"migrator-59844c95c7-hq5wj\" (UID: \"ccdf606d-6e18-4dad-b98d-5c4086aa953d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.260975 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbsz\" (UniqueName: \"kubernetes.io/projected/662ca662-4c9a-4546-975f-5ac76a11927f-kube-api-access-5dbsz\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261190 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6s4h\" (UniqueName: \"kubernetes.io/projected/f92069a8-3117-4693-ace6-f37ac416bf76-kube-api-access-l6s4h\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261256 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-oauth-config\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261398 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/07eaa085-f9f9-4d4a-9763-c3a216278f2b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261441 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261466 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-oauth-serving-cert\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261489 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f060881e-4060-4501-bcdc-b8f470d8f53e-audit-dir\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 
14:07:33.261520 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261551 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/892e841d-08ce-4e49-9238-70bd0f1268f8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261597 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-config\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261631 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-encryption-config\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261674 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: 
\"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261707 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkf6t\" (UniqueName: \"kubernetes.io/projected/e18f11c9-d605-41a8-9443-214c8d6a5c85-kube-api-access-hkf6t\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261731 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f6mf\" (UniqueName: \"kubernetes.io/projected/16dd44fa-b221-497c-a9fa-7dcf08359ab1-kube-api-access-5f6mf\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261770 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261832 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b146e05-ee69-4211-b20d-5c5342a66a98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261863 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/892e841d-08ce-4e49-9238-70bd0f1268f8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261944 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-audit-policies\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.261970 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-audit-policies\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262002 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-serving-cert\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262043 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262092 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5554\" (UniqueName: \"kubernetes.io/projected/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-kube-api-access-g5554\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262128 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/830dd0be-675e-405e-b29c-b2916985aac4-proxy-tls\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlhw\" (UniqueName: \"kubernetes.io/projected/cb22ecac-005d-414f-928c-5714be9f7596-kube-api-access-twlhw\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262189 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f92069a8-3117-4693-ace6-f37ac416bf76-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262212 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-service-ca-bundle\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262241 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-service-ca\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262273 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-config\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262301 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srn6t\" (UniqueName: \"kubernetes.io/projected/cbe31444-42f1-415e-86f8-ce9b8258fe65-kube-api-access-srn6t\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262332 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e841d-08ce-4e49-9238-70bd0f1268f8-config\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262356 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262511 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e59601bb-7561-4555-99e9-0e6faf392716-config-volume\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262538 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/c4dfbdbb-85c3-4c93-a321-d67e5438b772-kube-api-access-2dj9b\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262558 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-config\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262576 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e18f11c9-d605-41a8-9443-214c8d6a5c85-service-ca-bundle\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262611 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262630 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rvrk\" (UniqueName: \"kubernetes.io/projected/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-kube-api-access-6rvrk\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262653 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b146e05-ee69-4211-b20d-5c5342a66a98-config\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262673 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfgz8\" (UniqueName: \"kubernetes.io/projected/4c94d47c-c97c-493b-bdd6-cba399e6d9bc-kube-api-access-kfgz8\") pod \"multus-admission-controller-857f4d67dd-9vsf4\" (UID: \"4c94d47c-c97c-493b-bdd6-cba399e6d9bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262694 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-encryption-config\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262716 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shj8t\" (UniqueName: \"kubernetes.io/projected/830dd0be-675e-405e-b29c-b2916985aac4-kube-api-access-shj8t\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262750 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmvzn\" (UniqueName: \"kubernetes.io/projected/a385db05-78b0-4bb7-80b9-e0089b92e40c-kube-api-access-rmvzn\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262781 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb22ecac-005d-414f-928c-5714be9f7596-audit-dir\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262809 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a385db05-78b0-4bb7-80b9-e0089b92e40c-serving-cert\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262828 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262833 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-serving-cert\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262868 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-service-ca\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262941 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-etcd-client\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262972 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07eaa085-f9f9-4d4a-9763-c3a216278f2b-serving-cert\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.263007 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4dfbdbb-85c3-4c93-a321-d67e5438b772-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.263041 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.263076 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-webhook-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.263122 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4dfbdbb-85c3-4c93-a321-d67e5438b772-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.263497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-machine-approver-tls\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.263872 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1b954a21-e466-4ad6-aac5-f3bcea883d63-node-pullsecrets\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.264333 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-etcd-serving-ca\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.264551 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-trusted-ca-bundle\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.264811 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/07eaa085-f9f9-4d4a-9763-c3a216278f2b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.265289 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-trusted-ca\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.265715 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-service-ca-bundle\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.266356 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650212ba-40b6-4d9e-803d-3556b77bd87e-config\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.266440 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-service-ca\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.267424 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-config\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.267438 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-service-ca\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.267588 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-audit-policies\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.268164 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.268223 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-audit-policies\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.268434 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-client-ca\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.269398 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f060881e-4060-4501-bcdc-b8f470d8f53e-audit-dir\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.269730 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-oauth-serving-cert\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.269815 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f060881e-4060-4501-bcdc-b8f470d8f53e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.270576 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.270799 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.270855 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-audit\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.271172 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-config\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.271505 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-encryption-config\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.271660 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/650212ba-40b6-4d9e-803d-3556b77bd87e-serving-cert\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.272059 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.272179 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-trusted-ca-bundle\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.262000 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1b954a21-e466-4ad6-aac5-f3bcea883d63-audit-dir\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.272537 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-serving-cert\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.272708 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c94d47c-c97c-493b-bdd6-cba399e6d9bc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9vsf4\" (UID: \"4c94d47c-c97c-493b-bdd6-cba399e6d9bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.273692 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-etcd-client\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.273947 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.274024 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.274234 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-serving-cert\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.274340 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.272975 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.274562 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-serving-cert\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.274941 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.274959 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/c4dfbdbb-85c3-4c93-a321-d67e5438b772-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.275774 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-serving-cert\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.275810 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.276559 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1b954a21-e466-4ad6-aac5-f3bcea883d63-image-import-ca\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.276653 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-etcd-client\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.276702 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07eaa085-f9f9-4d4a-9763-c3a216278f2b-serving-cert\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.278066 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-auth-proxy-config\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.278125 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/662ca662-4c9a-4546-975f-5ac76a11927f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.278796 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1b954a21-e466-4ad6-aac5-f3bcea883d63-etcd-client\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.279110 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.279338 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a385db05-78b0-4bb7-80b9-e0089b92e40c-serving-cert\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.279463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.279605 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f060881e-4060-4501-bcdc-b8f470d8f53e-encryption-config\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.284847 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.305067 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.341161 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxz8z\" (UniqueName: \"kubernetes.io/projected/5e3f6beb-ef04-47ba-8738-849691b10351-kube-api-access-qxz8z\") pod \"controller-manager-879f6c89f-zgq57\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.362041 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmnh\" (UniqueName: \"kubernetes.io/projected/46d3221c-be55-4ab8-95f1-f55bc1eb6596-kube-api-access-dlmnh\") pod \"machine-api-operator-5694c8668f-t2v8z\" (UID: \"46d3221c-be55-4ab8-95f1-f55bc1eb6596\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.363972 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbpx\" (UniqueName: \"kubernetes.io/projected/a48f8395-2811-46ab-82ea-0c5afc8c8eee-kube-api-access-5dbpx\") pod \"package-server-manager-789f6589d5-m8ksr\" (UID: \"a48f8395-2811-46ab-82ea-0c5afc8c8eee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364022 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a48f8395-2811-46ab-82ea-0c5afc8c8eee-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8ksr\" (UID: \"a48f8395-2811-46ab-82ea-0c5afc8c8eee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364088 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzfd\" (UniqueName: \"kubernetes.io/projected/ccdf606d-6e18-4dad-b98d-5c4086aa953d-kube-api-access-fqzfd\") pod \"migrator-59844c95c7-hq5wj\" (UID: \"ccdf606d-6e18-4dad-b98d-5c4086aa953d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364132 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6s4h\" (UniqueName: \"kubernetes.io/projected/f92069a8-3117-4693-ace6-f37ac416bf76-kube-api-access-l6s4h\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364184 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkf6t\" (UniqueName: \"kubernetes.io/projected/e18f11c9-d605-41a8-9443-214c8d6a5c85-kube-api-access-hkf6t\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364228 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f6mf\" (UniqueName: \"kubernetes.io/projected/16dd44fa-b221-497c-a9fa-7dcf08359ab1-kube-api-access-5f6mf\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364254 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/892e841d-08ce-4e49-9238-70bd0f1268f8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364278 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs"
Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364298 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b146e05-ee69-4211-b20d-5c5342a66a98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364318 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/892e841d-08ce-4e49-9238-70bd0f1268f8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364345 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/830dd0be-675e-405e-b29c-b2916985aac4-proxy-tls\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364370 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f92069a8-3117-4693-ace6-f37ac416bf76-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364388 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srn6t\" (UniqueName: \"kubernetes.io/projected/cbe31444-42f1-415e-86f8-ce9b8258fe65-kube-api-access-srn6t\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364405 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e841d-08ce-4e49-9238-70bd0f1268f8-config\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364437 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e18f11c9-d605-41a8-9443-214c8d6a5c85-service-ca-bundle\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364470 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e59601bb-7561-4555-99e9-0e6faf392716-config-volume\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364544 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b146e05-ee69-4211-b20d-5c5342a66a98-config\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364575 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shj8t\" (UniqueName: \"kubernetes.io/projected/830dd0be-675e-405e-b29c-b2916985aac4-kube-api-access-shj8t\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364624 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-webhook-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364643 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-default-certificate\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364663 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f92069a8-3117-4693-ace6-f37ac416bf76-images\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364692 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-metrics-tls\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364710 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe31444-42f1-415e-86f8-ce9b8258fe65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" 
(UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364729 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe31444-42f1-415e-86f8-ce9b8258fe65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364747 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/849503fd-ce3e-42e9-bae7-596c510d2b8b-tmpfs\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364779 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/830dd0be-675e-405e-b29c-b2916985aac4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364796 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b146e05-ee69-4211-b20d-5c5342a66a98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 
14:07:33.364816 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-metrics-certs\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364844 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrzr\" (UniqueName: \"kubernetes.io/projected/e59601bb-7561-4555-99e9-0e6faf392716-kube-api-access-nkrzr\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364864 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-stats-auth\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364896 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e59601bb-7561-4555-99e9-0e6faf392716-secret-volume\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364915 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6ec7e01-d640-484f-8b1e-37accfa6c3d2-metrics-tls\") pod \"dns-operator-744455d44c-br7d9\" (UID: \"d6ec7e01-d640-484f-8b1e-37accfa6c3d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 
14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364935 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmdqk\" (UniqueName: \"kubernetes.io/projected/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-kube-api-access-rmdqk\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364953 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-config-volume\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364972 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f92069a8-3117-4693-ace6-f37ac416bf76-proxy-tls\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364992 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzb55\" (UniqueName: \"kubernetes.io/projected/849503fd-ce3e-42e9-bae7-596c510d2b8b-kube-api-access-zzb55\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.365029 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ce0b622-7220-4c64-ba53-83fe3255d20c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rjlbl\" (UID: 
\"6ce0b622-7220-4c64-ba53-83fe3255d20c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.365058 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54s5\" (UniqueName: \"kubernetes.io/projected/6ce0b622-7220-4c64-ba53-83fe3255d20c-kube-api-access-s54s5\") pod \"control-plane-machine-set-operator-78cbb6b69f-rjlbl\" (UID: \"6ce0b622-7220-4c64-ba53-83fe3255d20c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.365082 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.365121 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.365151 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m98sb\" (UniqueName: \"kubernetes.io/projected/d6ec7e01-d640-484f-8b1e-37accfa6c3d2-kube-api-access-m98sb\") pod \"dns-operator-744455d44c-br7d9\" (UID: \"d6ec7e01-d640-484f-8b1e-37accfa6c3d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.365597 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/849503fd-ce3e-42e9-bae7-596c510d2b8b-tmpfs\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.364952 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f92069a8-3117-4693-ace6-f37ac416bf76-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.366660 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/830dd0be-675e-405e-b29c-b2916985aac4-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.368065 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe31444-42f1-415e-86f8-ce9b8258fe65-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.369130 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbe31444-42f1-415e-86f8-ce9b8258fe65-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.403431 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjw5d\" (UniqueName: \"kubernetes.io/projected/ba1b44ea-f831-4fe6-b5de-fe824a484cf3-kube-api-access-kjw5d\") pod \"openshift-controller-manager-operator-756b6f6bc6-dm9ps\" (UID: \"ba1b44ea-f831-4fe6-b5de-fe824a484cf3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.406089 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.425249 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.444729 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.445701 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b146e05-ee69-4211-b20d-5c5342a66a98-config\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.464716 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.477577 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b146e05-ee69-4211-b20d-5c5342a66a98-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.484809 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.504693 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.505976 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.518332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/892e841d-08ce-4e49-9238-70bd0f1268f8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.522125 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.526537 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.544806 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.545631 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/892e841d-08ce-4e49-9238-70bd0f1268f8-config\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.556643 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.565468 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.585732 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.589583 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-stats-auth\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.605859 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.616256 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e18f11c9-d605-41a8-9443-214c8d6a5c85-service-ca-bundle\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.625950 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.640723 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-default-certificate\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc 
kubenswrapper[4729]: I0127 14:07:33.645178 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.665283 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.669206 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e18f11c9-d605-41a8-9443-214c8d6a5c85-metrics-certs\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.693251 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.705375 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zgq57"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.705790 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.724682 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.727079 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps"] Jan 27 14:07:33 crc kubenswrapper[4729]: W0127 14:07:33.738636 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba1b44ea_f831_4fe6_b5de_fe824a484cf3.slice/crio-003c3ca85b883acf02b60ba4a03eaa4d96a68bccf123859b0521e8ad462fa9d1 WatchSource:0}: 
Error finding container 003c3ca85b883acf02b60ba4a03eaa4d96a68bccf123859b0521e8ad462fa9d1: Status 404 returned error can't find the container with id 003c3ca85b883acf02b60ba4a03eaa4d96a68bccf123859b0521e8ad462fa9d1 Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.742199 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" event={"ID":"5e3f6beb-ef04-47ba-8738-849691b10351","Type":"ContainerStarted","Data":"4e3ece4aea83e4fdaab8a40c3d51dff50cfa98605e00cc21112bf7ed44ef4117"} Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.744816 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.759107 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t2v8z"] Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.765143 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: W0127 14:07:33.778297 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46d3221c_be55_4ab8_95f1_f55bc1eb6596.slice/crio-346287e60693dc17bf2d5cf5c6163ed95a2606cca0195e16bb51185c61231047 WatchSource:0}: Error finding container 346287e60693dc17bf2d5cf5c6163ed95a2606cca0195e16bb51185c61231047: Status 404 returned error can't find the container with id 346287e60693dc17bf2d5cf5c6163ed95a2606cca0195e16bb51185c61231047 Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.785135 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.805391 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.825723 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.840237 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6ec7e01-d640-484f-8b1e-37accfa6c3d2-metrics-tls\") pod \"dns-operator-744455d44c-br7d9\" (UID: \"d6ec7e01-d640-484f-8b1e-37accfa6c3d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.848512 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.866056 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.886132 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.905078 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.925088 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.929811 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ce0b622-7220-4c64-ba53-83fe3255d20c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rjlbl\" (UID: \"6ce0b622-7220-4c64-ba53-83fe3255d20c\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.945470 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.956326 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f92069a8-3117-4693-ace6-f37ac416bf76-images\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.965171 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 14:07:33 crc kubenswrapper[4729]: I0127 14:07:33.986648 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.005615 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.010818 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f92069a8-3117-4693-ace6-f37ac416bf76-proxy-tls\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.025053 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.046700 4729 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.063783 4729 request.go:700] Waited for 1.005565982s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.065538 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.086630 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.100250 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e59601bb-7561-4555-99e9-0e6faf392716-secret-volume\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.105414 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.125953 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.138332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a48f8395-2811-46ab-82ea-0c5afc8c8eee-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-m8ksr\" (UID: \"a48f8395-2811-46ab-82ea-0c5afc8c8eee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.146256 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.166010 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.186344 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.221179 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.225576 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.246271 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.265358 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.267450 4729 configmap.go:193] Couldn't get configMap openshift-console/console-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.267548 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config podName:8e60df4d-540b-489f-a297-46f35014add0 nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:34.767527245 +0000 UTC m=+141.351718249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config") pod "console-f9d7485db-cg69z" (UID: "8e60df4d-540b-489f-a297-46f35014add0") : failed to sync configmap cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.284792 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.305420 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.316676 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e59601bb-7561-4555-99e9-0e6faf392716-config-volume\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.325158 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.345062 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.364486 4729 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.364584 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-apiservice-cert podName:849503fd-ce3e-42e9-bae7-596c510d2b8b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:34.864562106 +0000 UTC m=+141.448753110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-apiservice-cert") pod "packageserver-d55dfcdfc-krhqs" (UID: "849503fd-ce3e-42e9-bae7-596c510d2b8b") : failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.364722 4729 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.364835 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/830dd0be-675e-405e-b29c-b2916985aac4-proxy-tls podName:830dd0be-675e-405e-b29c-b2916985aac4 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:34.864813024 +0000 UTC m=+141.449004028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/830dd0be-675e-405e-b29c-b2916985aac4-proxy-tls") pod "machine-config-controller-84d6567774-5g6nd" (UID: "830dd0be-675e-405e-b29c-b2916985aac4") : failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.364962 4729 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365020 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-webhook-cert podName:849503fd-ce3e-42e9-bae7-596c510d2b8b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:34.86500458 +0000 UTC m=+141.449195634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-webhook-cert") pod "packageserver-d55dfcdfc-krhqs" (UID: "849503fd-ce3e-42e9-bae7-596c510d2b8b") : failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365078 4729 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365138 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-metrics-tls podName:ca2d17dd-20d7-40ed-a0d8-36fd0e037842 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:34.865110033 +0000 UTC m=+141.449301037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-metrics-tls") pod "dns-default-knf65" (UID: "ca2d17dd-20d7-40ed-a0d8-36fd0e037842") : failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.365635 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365722 4729 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365761 4729 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365764 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-config-volume podName:ca2d17dd-20d7-40ed-a0d8-36fd0e037842 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:34.865752093 +0000 UTC m=+141.449943117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-config-volume") pod "dns-default-knf65" (UID: "ca2d17dd-20d7-40ed-a0d8-36fd0e037842") : failed to sync configmap cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365840 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca podName:16dd44fa-b221-497c-a9fa-7dcf08359ab1 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:34.865830257 +0000 UTC m=+141.450021331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca") pod "marketplace-operator-79b997595-d8548" (UID: "16dd44fa-b221-497c-a9fa-7dcf08359ab1") : failed to sync configmap cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365778 4729 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: E0127 14:07:34.365974 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics podName:16dd44fa-b221-497c-a9fa-7dcf08359ab1 nodeName:}" failed. No retries permitted until 2026-01-27 14:07:34.86592792 +0000 UTC m=+141.450118984 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics") pod "marketplace-operator-79b997595-d8548" (UID: "16dd44fa-b221-497c-a9fa-7dcf08359ab1") : failed to sync secret cache: timed out waiting for the condition Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.384817 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.405556 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.424719 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.446278 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.465755 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.490900 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.505102 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.528363 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.544345 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.570206 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.585949 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.606002 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.630570 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.645597 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.665669 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.685503 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.706066 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.725133 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.746625 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.757506 
4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" event={"ID":"5e3f6beb-ef04-47ba-8738-849691b10351","Type":"ContainerStarted","Data":"079db0abcf08b6b51cb2847f212a902b22d17ec6953a79c00eef2fa980085c8e"} Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.757685 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.758945 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zgq57 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.758986 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.760395 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" event={"ID":"46d3221c-be55-4ab8-95f1-f55bc1eb6596","Type":"ContainerStarted","Data":"c7beda2abefc6acf843a3ba1b959f91421bcef15356d87e866f387fa4002ad8e"} Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.760641 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" event={"ID":"46d3221c-be55-4ab8-95f1-f55bc1eb6596","Type":"ContainerStarted","Data":"746ed413b4072f0d056c8a29bdffb2293dff6d7d51715e0af3633001f272b848"} Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.760651 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" event={"ID":"46d3221c-be55-4ab8-95f1-f55bc1eb6596","Type":"ContainerStarted","Data":"346287e60693dc17bf2d5cf5c6163ed95a2606cca0195e16bb51185c61231047"} Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.763624 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" event={"ID":"ba1b44ea-f831-4fe6-b5de-fe824a484cf3","Type":"ContainerStarted","Data":"549657bfbe11803bd95db3015c2a432fa794abfea20863d11f860d146dce223d"} Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.763680 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" event={"ID":"ba1b44ea-f831-4fe6-b5de-fe824a484cf3","Type":"ContainerStarted","Data":"003c3ca85b883acf02b60ba4a03eaa4d96a68bccf123859b0521e8ad462fa9d1"} Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.764719 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.785412 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.793725 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.805447 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.825861 4729 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.846034 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895237 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895315 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/830dd0be-675e-405e-b29c-b2916985aac4-proxy-tls\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895430 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-webhook-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895460 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-metrics-tls\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895536 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-config-volume\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895578 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.895609 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.896310 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-config-volume\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.900082 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.901365 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-metrics-tls\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.901424 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-webhook-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.902058 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/849503fd-ce3e-42e9-bae7-596c510d2b8b-apiservice-cert\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.903578 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfxtl\" (UniqueName: \"kubernetes.io/projected/5291212b-828e-4312-aa3a-0187772f076f-kube-api-access-dfxtl\") pod \"downloads-7954f5f757-6rfvl\" (UID: \"5291212b-828e-4312-aa3a-0187772f076f\") " pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.904346 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.905747 4729 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.906302 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/830dd0be-675e-405e-b29c-b2916985aac4-proxy-tls\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.926604 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.945060 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.965251 4729 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 14:07:34 crc kubenswrapper[4729]: I0127 14:07:34.986265 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.005333 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.043216 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2231abf6-4f19-4ee2-9c0b-c0125b08cc77-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rbthg\" (UID: \"2231abf6-4f19-4ee2-9c0b-c0125b08cc77\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.047104 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.063916 4729 request.go:700] Waited for 1.804945145s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.067463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24nlj\" (UniqueName: \"kubernetes.io/projected/f060881e-4060-4501-bcdc-b8f470d8f53e-kube-api-access-24nlj\") pod \"apiserver-7bbb656c7d-sbq9j\" (UID: \"f060881e-4060-4501-bcdc-b8f470d8f53e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.082339 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkr5g\" (UniqueName: \"kubernetes.io/projected/07eaa085-f9f9-4d4a-9763-c3a216278f2b-kube-api-access-jkr5g\") pod \"openshift-config-operator-7777fb866f-2xxhk\" (UID: \"07eaa085-f9f9-4d4a-9763-c3a216278f2b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.100383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5554\" (UniqueName: \"kubernetes.io/projected/6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d-kube-api-access-g5554\") pod \"etcd-operator-b45778765-wltzd\" (UID: \"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.119574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlhw\" (UniqueName: \"kubernetes.io/projected/cb22ecac-005d-414f-928c-5714be9f7596-kube-api-access-twlhw\") pod \"oauth-openshift-558db77b4-89l7c\" (UID: 
\"cb22ecac-005d-414f-928c-5714be9f7596\") " pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.151100 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbmcc\" (UniqueName: \"kubernetes.io/projected/8e60df4d-540b-489f-a297-46f35014add0-kube-api-access-jbmcc\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.161622 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcdz\" (UniqueName: \"kubernetes.io/projected/6364490e-1548-4bfb-8a2a-0fb5d2dddddb-kube-api-access-fgcdz\") pod \"machine-approver-56656f9798-qddhn\" (UID: \"6364490e-1548-4bfb-8a2a-0fb5d2dddddb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.183245 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcxdv\" (UniqueName: \"kubernetes.io/projected/1b954a21-e466-4ad6-aac5-f3bcea883d63-kube-api-access-pcxdv\") pod \"apiserver-76f77b778f-frjsp\" (UID: \"1b954a21-e466-4ad6-aac5-f3bcea883d63\") " pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.200601 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dj9b\" (UniqueName: \"kubernetes.io/projected/c4dfbdbb-85c3-4c93-a321-d67e5438b772-kube-api-access-2dj9b\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.206951 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.218299 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6rfvl"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.224675 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rvrk\" (UniqueName: \"kubernetes.io/projected/55aa3ae7-48f5-4b9a-9820-9c3c7aece43f-kube-api-access-6rvrk\") pod \"console-operator-58897d9998-kdqlt\" (UID: \"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f\") " pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:35 crc kubenswrapper[4729]: W0127 14:07:35.228340 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5291212b_828e_4312_aa3a_0187772f076f.slice/crio-2b7419356323480b7c49480ff41f98c7b72ae13255c0e88c63d5b7a813d5c96d WatchSource:0}: Error finding container 2b7419356323480b7c49480ff41f98c7b72ae13255c0e88c63d5b7a813d5c96d: Status 404 returned error can't find the container with id 2b7419356323480b7c49480ff41f98c7b72ae13255c0e88c63d5b7a813d5c96d Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.234179 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.241275 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfgz8\" (UniqueName: \"kubernetes.io/projected/4c94d47c-c97c-493b-bdd6-cba399e6d9bc-kube-api-access-kfgz8\") pod \"multus-admission-controller-857f4d67dd-9vsf4\" (UID: \"4c94d47c-c97c-493b-bdd6-cba399e6d9bc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.242693 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:35 crc kubenswrapper[4729]: W0127 14:07:35.257496 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6364490e_1548_4bfb_8a2a_0fb5d2dddddb.slice/crio-a0c2c93694843fb6d343a303609582a359a49712969aa93f26793ff6947b3fbd WatchSource:0}: Error finding container a0c2c93694843fb6d343a303609582a359a49712969aa93f26793ff6947b3fbd: Status 404 returned error can't find the container with id a0c2c93694843fb6d343a303609582a359a49712969aa93f26793ff6947b3fbd Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.263682 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c4dfbdbb-85c3-4c93-a321-d67e5438b772-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-94sjk\" (UID: \"c4dfbdbb-85c3-4c93-a321-d67e5438b772\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.268749 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.279433 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmvzn\" (UniqueName: \"kubernetes.io/projected/a385db05-78b0-4bb7-80b9-e0089b92e40c-kube-api-access-rmvzn\") pod \"route-controller-manager-6576b87f9c-dx8cq\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.287966 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.292486 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.301165 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.318136 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bppmn\" (UniqueName: \"kubernetes.io/projected/650212ba-40b6-4d9e-803d-3556b77bd87e-kube-api-access-bppmn\") pod \"authentication-operator-69f744f599-78zrr\" (UID: \"650212ba-40b6-4d9e-803d-3556b77bd87e\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.323162 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbsz\" (UniqueName: \"kubernetes.io/projected/662ca662-4c9a-4546-975f-5ac76a11927f-kube-api-access-5dbsz\") pod \"openshift-apiserver-operator-796bbdcf4f-565qs\" (UID: \"662ca662-4c9a-4546-975f-5ac76a11927f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.355833 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbpx\" (UniqueName: \"kubernetes.io/projected/a48f8395-2811-46ab-82ea-0c5afc8c8eee-kube-api-access-5dbpx\") pod \"package-server-manager-789f6589d5-m8ksr\" (UID: \"a48f8395-2811-46ab-82ea-0c5afc8c8eee\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.359643 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fqzfd\" (UniqueName: \"kubernetes.io/projected/ccdf606d-6e18-4dad-b98d-5c4086aa953d-kube-api-access-fqzfd\") pod \"migrator-59844c95c7-hq5wj\" (UID: \"ccdf606d-6e18-4dad-b98d-5c4086aa953d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.366686 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.392438 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6s4h\" (UniqueName: \"kubernetes.io/projected/f92069a8-3117-4693-ace6-f37ac416bf76-kube-api-access-l6s4h\") pod \"machine-config-operator-74547568cd-8w2tz\" (UID: \"f92069a8-3117-4693-ace6-f37ac416bf76\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.407231 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkf6t\" (UniqueName: \"kubernetes.io/projected/e18f11c9-d605-41a8-9443-214c8d6a5c85-kube-api-access-hkf6t\") pod \"router-default-5444994796-f58jq\" (UID: \"e18f11c9-d605-41a8-9443-214c8d6a5c85\") " pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.410949 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.422655 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f6mf\" (UniqueName: \"kubernetes.io/projected/16dd44fa-b221-497c-a9fa-7dcf08359ab1-kube-api-access-5f6mf\") pod \"marketplace-operator-79b997595-d8548\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.433054 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.439300 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.448425 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/892e841d-08ce-4e49-9238-70bd0f1268f8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lmb8r\" (UID: \"892e841d-08ce-4e49-9238-70bd0f1268f8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.453988 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.461174 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srn6t\" (UniqueName: \"kubernetes.io/projected/cbe31444-42f1-415e-86f8-ce9b8258fe65-kube-api-access-srn6t\") pod \"kube-storage-version-migrator-operator-b67b599dd-wldws\" (UID: \"cbe31444-42f1-415e-86f8-ce9b8258fe65\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.470063 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.480191 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shj8t\" (UniqueName: \"kubernetes.io/projected/830dd0be-675e-405e-b29c-b2916985aac4-kube-api-access-shj8t\") pod \"machine-config-controller-84d6567774-5g6nd\" (UID: \"830dd0be-675e-405e-b29c-b2916985aac4\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.508482 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrzr\" (UniqueName: \"kubernetes.io/projected/e59601bb-7561-4555-99e9-0e6faf392716-kube-api-access-nkrzr\") pod \"collect-profiles-29492040-d92vv\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.510361 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.518399 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.518626 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.527937 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.528372 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m98sb\" (UniqueName: \"kubernetes.io/projected/d6ec7e01-d640-484f-8b1e-37accfa6c3d2-kube-api-access-m98sb\") pod \"dns-operator-744455d44c-br7d9\" (UID: \"d6ec7e01-d640-484f-8b1e-37accfa6c3d2\") " pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.540331 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmdqk\" (UniqueName: \"kubernetes.io/projected/ca2d17dd-20d7-40ed-a0d8-36fd0e037842-kube-api-access-rmdqk\") pod \"dns-default-knf65\" (UID: \"ca2d17dd-20d7-40ed-a0d8-36fd0e037842\") " pod="openshift-dns/dns-default-knf65" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.552388 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.555212 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-knf65" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.559429 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.564470 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1b146e05-ee69-4211-b20d-5c5342a66a98-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ss7lk\" (UID: \"1b146e05-ee69-4211-b20d-5c5342a66a98\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.591943 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54s5\" (UniqueName: \"kubernetes.io/projected/6ce0b622-7220-4c64-ba53-83fe3255d20c-kube-api-access-s54s5\") pod \"control-plane-machine-set-operator-78cbb6b69f-rjlbl\" (UID: \"6ce0b622-7220-4c64-ba53-83fe3255d20c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.611373 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.617248 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzb55\" (UniqueName: \"kubernetes.io/projected/849503fd-ce3e-42e9-bae7-596c510d2b8b-kube-api-access-zzb55\") pod \"packageserver-d55dfcdfc-krhqs\" (UID: \"849503fd-ce3e-42e9-bae7-596c510d2b8b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.617527 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.625106 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.628181 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.639532 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.640354 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config\") pod \"console-f9d7485db-cg69z\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.654105 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-frjsp"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.654362 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.657255 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713130 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nqnw\" (UniqueName: \"kubernetes.io/projected/913e35a2-8a8a-46d5-8e6e-cac158b42871-kube-api-access-4nqnw\") pod \"ingress-canary-8sd69\" (UID: \"913e35a2-8a8a-46d5-8e6e-cac158b42871\") " pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713171 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-srv-cert\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713222 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkmmd\" (UniqueName: \"kubernetes.io/projected/b04b84a3-e0a2-4271-b805-dbc5b3ff67be-kube-api-access-tkmmd\") pod \"cluster-samples-operator-665b6dd947-9rzkd\" (UID: \"b04b84a3-e0a2-4271-b805-dbc5b3ff67be\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713255 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/670cb5da-a4e2-4685-8dff-e8963eccaab3-metrics-tls\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713274 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/84a971b1-5f64-4604-85b4-79d8fb8ba040-srv-cert\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713303 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6gs\" (UniqueName: \"kubernetes.io/projected/84a971b1-5f64-4604-85b4-79d8fb8ba040-kube-api-access-pn6gs\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713356 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-tls\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713410 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqb4b\" (UniqueName: \"kubernetes.io/projected/e755ab51-7011-46bf-a62c-00aeef6ac4fc-kube-api-access-dqb4b\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713430 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7186209-1e51-47db-8ced-ec6f4717b60f-signing-cabundle\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713502 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670cb5da-a4e2-4685-8dff-e8963eccaab3-trusted-ca\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713541 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-certificates\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713562 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713590 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-bound-sa-token\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713611 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713632 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/913e35a2-8a8a-46d5-8e6e-cac158b42871-cert\") pod \"ingress-canary-8sd69\" (UID: \"913e35a2-8a8a-46d5-8e6e-cac158b42871\") " pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713653 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e755ab51-7011-46bf-a62c-00aeef6ac4fc-serving-cert\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713704 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7186209-1e51-47db-8ced-ec6f4717b60f-signing-key\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713737 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713787 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670cb5da-a4e2-4685-8dff-e8963eccaab3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713837 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhx95\" (UniqueName: \"kubernetes.io/projected/670cb5da-a4e2-4685-8dff-e8963eccaab3-kube-api-access-dhx95\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713890 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-profile-collector-cert\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713945 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2zr\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-kube-api-access-vl2zr\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.713983 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b04b84a3-e0a2-4271-b805-dbc5b3ff67be-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9rzkd\" (UID: \"b04b84a3-e0a2-4271-b805-dbc5b3ff67be\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.714006 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84a971b1-5f64-4604-85b4-79d8fb8ba040-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.714030 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk4zx\" (UniqueName: \"kubernetes.io/projected/c7186209-1e51-47db-8ced-ec6f4717b60f-kube-api-access-vk4zx\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.714065 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-trusted-ca\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.714088 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e755ab51-7011-46bf-a62c-00aeef6ac4fc-config\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 
27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.714110 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5ct7\" (UniqueName: \"kubernetes.io/projected/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-kube-api-access-m5ct7\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: E0127 14:07:35.720473 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.220455712 +0000 UTC m=+142.804646796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.722142 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" Jan 27 14:07:35 crc kubenswrapper[4729]: W0127 14:07:35.743749 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b954a21_e466_4ad6_aac5_f3bcea883d63.slice/crio-dc11035b01d9b4f7f388ca661a3aca6e96cb882caa34fe4fe072125b0173cea7 WatchSource:0}: Error finding container dc11035b01d9b4f7f388ca661a3aca6e96cb882caa34fe4fe072125b0173cea7: Status 404 returned error can't find the container with id dc11035b01d9b4f7f388ca661a3aca6e96cb882caa34fe4fe072125b0173cea7 Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.747995 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.777564 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.783197 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wltzd"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.790400 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9vsf4"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.809280 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-89l7c"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.813104 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" event={"ID":"6364490e-1548-4bfb-8a2a-0fb5d2dddddb","Type":"ContainerStarted","Data":"8273cacbbbf90f7dfca3cfd5ad3eaef446cbd3d3e2f772bacf735f3ab5b3cac8"} Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 
14:07:35.813150 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" event={"ID":"6364490e-1548-4bfb-8a2a-0fb5d2dddddb","Type":"ContainerStarted","Data":"a0c2c93694843fb6d343a303609582a359a49712969aa93f26793ff6947b3fbd"} Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.814302 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.814671 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.814914 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/670cb5da-a4e2-4685-8dff-e8963eccaab3-metrics-tls\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.814950 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84a971b1-5f64-4604-85b4-79d8fb8ba040-srv-cert\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.814979 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6gs\" (UniqueName: \"kubernetes.io/projected/84a971b1-5f64-4604-85b4-79d8fb8ba040-kube-api-access-pn6gs\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: 
\"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815128 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-tls\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815163 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-certs\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815184 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-csi-data-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815233 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-plugins-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815288 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqb4b\" (UniqueName: 
\"kubernetes.io/projected/e755ab51-7011-46bf-a62c-00aeef6ac4fc-kube-api-access-dqb4b\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815311 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7186209-1e51-47db-8ced-ec6f4717b60f-signing-cabundle\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815427 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670cb5da-a4e2-4685-8dff-e8963eccaab3-trusted-ca\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815465 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-certificates\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815482 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815558 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-bound-sa-token\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815602 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-registration-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815630 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815649 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/913e35a2-8a8a-46d5-8e6e-cac158b42871-cert\") pod \"ingress-canary-8sd69\" (UID: \"913e35a2-8a8a-46d5-8e6e-cac158b42871\") " pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815671 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e755ab51-7011-46bf-a62c-00aeef6ac4fc-serving-cert\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 
14:07:35.815727 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-node-bootstrap-token\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815772 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7186209-1e51-47db-8ced-ec6f4717b60f-signing-key\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815843 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670cb5da-a4e2-4685-8dff-e8963eccaab3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: E0127 14:07:35.815900 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.315860261 +0000 UTC m=+142.900051315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.815917 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816021 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhx95\" (UniqueName: \"kubernetes.io/projected/670cb5da-a4e2-4685-8dff-e8963eccaab3-kube-api-access-dhx95\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816091 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-profile-collector-cert\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816172 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4h6n\" (UniqueName: 
\"kubernetes.io/projected/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-kube-api-access-t4h6n\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816300 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl2zr\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-kube-api-access-vl2zr\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816434 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b04b84a3-e0a2-4271-b805-dbc5b3ff67be-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9rzkd\" (UID: \"b04b84a3-e0a2-4271-b805-dbc5b3ff67be\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816555 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84a971b1-5f64-4604-85b4-79d8fb8ba040-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816604 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk4zx\" (UniqueName: \"kubernetes.io/projected/c7186209-1e51-47db-8ced-ec6f4717b60f-kube-api-access-vk4zx\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816630 
4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-trusted-ca\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816708 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e755ab51-7011-46bf-a62c-00aeef6ac4fc-config\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816745 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhjzg\" (UniqueName: \"kubernetes.io/projected/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-kube-api-access-jhjzg\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816805 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5ct7\" (UniqueName: \"kubernetes.io/projected/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-kube-api-access-m5ct7\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816912 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nqnw\" (UniqueName: \"kubernetes.io/projected/913e35a2-8a8a-46d5-8e6e-cac158b42871-kube-api-access-4nqnw\") pod \"ingress-canary-8sd69\" (UID: \"913e35a2-8a8a-46d5-8e6e-cac158b42871\") " 
pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816940 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-srv-cert\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.816982 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-socket-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.817005 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-mountpoint-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.817031 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkmmd\" (UniqueName: \"kubernetes.io/projected/b04b84a3-e0a2-4271-b805-dbc5b3ff67be-kube-api-access-tkmmd\") pod \"cluster-samples-operator-665b6dd947-9rzkd\" (UID: \"b04b84a3-e0a2-4271-b805-dbc5b3ff67be\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.821783 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" 
event={"ID":"1b954a21-e466-4ad6-aac5-f3bcea883d63","Type":"ContainerStarted","Data":"dc11035b01d9b4f7f388ca661a3aca6e96cb882caa34fe4fe072125b0173cea7"} Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.822509 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/913e35a2-8a8a-46d5-8e6e-cac158b42871-cert\") pod \"ingress-canary-8sd69\" (UID: \"913e35a2-8a8a-46d5-8e6e-cac158b42871\") " pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.823221 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c7186209-1e51-47db-8ced-ec6f4717b60f-signing-cabundle\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.824239 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/670cb5da-a4e2-4685-8dff-e8963eccaab3-trusted-ca\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.827350 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-tls\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.828809 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-trusted-ca\") pod \"image-registry-697d97f7c8-ck87f\" (UID: 
\"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.829012 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84a971b1-5f64-4604-85b4-79d8fb8ba040-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.829867 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e755ab51-7011-46bf-a62c-00aeef6ac4fc-config\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.831713 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" event={"ID":"07eaa085-f9f9-4d4a-9763-c3a216278f2b","Type":"ContainerStarted","Data":"a6e69f0983c4536a424ae93386da2cf7f7ad783a72669c542fd9491c57fa0ec8"} Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.850275 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e755ab51-7011-46bf-a62c-00aeef6ac4fc-serving-cert\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.851143 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c7186209-1e51-47db-8ced-ec6f4717b60f-signing-key\") pod \"service-ca-9c57cc56f-698tz\" (UID: 
\"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.851325 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-srv-cert\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.851696 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.852231 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84a971b1-5f64-4604-85b4-79d8fb8ba040-srv-cert\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.852648 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.853644 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-profile-collector-cert\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.853899 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b04b84a3-e0a2-4271-b805-dbc5b3ff67be-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9rzkd\" (UID: \"b04b84a3-e0a2-4271-b805-dbc5b3ff67be\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.860817 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6rfvl" event={"ID":"5291212b-828e-4312-aa3a-0187772f076f","Type":"ContainerStarted","Data":"690df5e674971c7af9cf662efdcde8fc3b0936c7a42d8e79e921238300664609"} Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.860924 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6rfvl" event={"ID":"5291212b-828e-4312-aa3a-0187772f076f","Type":"ContainerStarted","Data":"2b7419356323480b7c49480ff41f98c7b72ae13255c0e88c63d5b7a813d5c96d"} Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.861801 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-certificates\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.863959 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/670cb5da-a4e2-4685-8dff-e8963eccaab3-metrics-tls\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: W0127 14:07:35.864701 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d714a1d_71b8_4daa_a1ce_ce9c0fb7b92d.slice/crio-d28b916a74d6cf4eff5a48eb87db1f4a09cc4f890eb92bbccb35f155d6b4ee47 WatchSource:0}: Error finding container d28b916a74d6cf4eff5a48eb87db1f4a09cc4f890eb92bbccb35f155d6b4ee47: Status 404 returned error can't find the container with id d28b916a74d6cf4eff5a48eb87db1f4a09cc4f890eb92bbccb35f155d6b4ee47 Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.873274 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.873369 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.877031 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.881694 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.886785 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.895680 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/670cb5da-a4e2-4685-8dff-e8963eccaab3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.898401 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6gs\" (UniqueName: \"kubernetes.io/projected/84a971b1-5f64-4604-85b4-79d8fb8ba040-kube-api-access-pn6gs\") pod \"olm-operator-6b444d44fb-bhcnj\" (UID: \"84a971b1-5f64-4604-85b4-79d8fb8ba040\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.918966 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-bound-sa-token\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.919924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4h6n\" (UniqueName: \"kubernetes.io/projected/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-kube-api-access-t4h6n\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 
14:07:35.919986 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhjzg\" (UniqueName: \"kubernetes.io/projected/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-kube-api-access-jhjzg\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920038 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-socket-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920054 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-mountpoint-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920109 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-certs\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920126 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-csi-data-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920156 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-plugins-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920244 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-registration-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920267 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-node-bootstrap-token\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.920299 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.922266 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-plugins-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc 
kubenswrapper[4729]: I0127 14:07:35.922458 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-registration-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.922458 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-csi-data-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: E0127 14:07:35.922738 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.422725485 +0000 UTC m=+143.006916489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.923198 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-mountpoint-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.923238 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-socket-dir\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:35 crc kubenswrapper[4729]: W0127 14:07:35.924473 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c94d47c_c97c_493b_bdd6_cba399e6d9bc.slice/crio-a87e9906da4ca9488f7a0370a0f16299db62ee4d0f575c3d1f3812d4edb45728 WatchSource:0}: Error finding container a87e9906da4ca9488f7a0370a0f16299db62ee4d0f575c3d1f3812d4edb45728: Status 404 returned error can't find the container with id a87e9906da4ca9488f7a0370a0f16299db62ee4d0f575c3d1f3812d4edb45728 Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.929667 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-node-bootstrap-token\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.934760 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-certs\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.935839 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-kdqlt"] Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.935988 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqb4b\" (UniqueName: \"kubernetes.io/projected/e755ab51-7011-46bf-a62c-00aeef6ac4fc-kube-api-access-dqb4b\") pod \"service-ca-operator-777779d784-92w7g\" (UID: \"e755ab51-7011-46bf-a62c-00aeef6ac4fc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.954459 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkmmd\" (UniqueName: \"kubernetes.io/projected/b04b84a3-e0a2-4271-b805-dbc5b3ff67be-kube-api-access-tkmmd\") pod \"cluster-samples-operator-665b6dd947-9rzkd\" (UID: \"b04b84a3-e0a2-4271-b805-dbc5b3ff67be\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.958656 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.969034 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5ct7\" (UniqueName: \"kubernetes.io/projected/5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c-kube-api-access-m5ct7\") pod \"catalog-operator-68c6474976-9kmmz\" (UID: \"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:35 crc kubenswrapper[4729]: I0127 14:07:35.985784 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nqnw\" (UniqueName: \"kubernetes.io/projected/913e35a2-8a8a-46d5-8e6e-cac158b42871-kube-api-access-4nqnw\") pod \"ingress-canary-8sd69\" (UID: \"913e35a2-8a8a-46d5-8e6e-cac158b42871\") " pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.022043 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.022329 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.522310648 +0000 UTC m=+143.106501652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.022429 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.022693 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.52268531 +0000 UTC m=+143.106876314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.035499 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk4zx\" (UniqueName: \"kubernetes.io/projected/c7186209-1e51-47db-8ced-ec6f4717b60f-kube-api-access-vk4zx\") pod \"service-ca-9c57cc56f-698tz\" (UID: \"c7186209-1e51-47db-8ced-ec6f4717b60f\") " pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.041931 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.051359 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl2zr\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-kube-api-access-vl2zr\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.056621 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhx95\" (UniqueName: \"kubernetes.io/projected/670cb5da-a4e2-4685-8dff-e8963eccaab3-kube-api-access-dhx95\") pod \"ingress-operator-5b745b69d9-p8bjj\" (UID: \"670cb5da-a4e2-4685-8dff-e8963eccaab3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.060478 4729 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.105461 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-698tz" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.106746 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.107116 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.117572 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4h6n\" (UniqueName: \"kubernetes.io/projected/f3c004e3-2eca-4fed-ac6b-9e1eb31fb511-kube-api-access-t4h6n\") pod \"csi-hostpathplugin-5q2rv\" (UID: \"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511\") " pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.124420 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.124897 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.624861665 +0000 UTC m=+143.209052669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.127542 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.134664 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhjzg\" (UniqueName: \"kubernetes.io/projected/5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd-kube-api-access-jhjzg\") pod \"machine-config-server-fdbtl\" (UID: \"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd\") " pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.142649 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.145576 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8sd69" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.176666 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fdbtl" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.181680 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.188569 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.235576 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.235849 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.735837311 +0000 UTC m=+143.320028315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.308775 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8548"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.336577 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.336899 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.836852368 +0000 UTC m=+143.421043362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.337015 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.337380 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.837373476 +0000 UTC m=+143.421564480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.437920 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.438152 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.938125285 +0000 UTC m=+143.522316299 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.438330 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.438669 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:36.938658092 +0000 UTC m=+143.522849166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.540760 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.541391 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.041374854 +0000 UTC m=+143.625565858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.596921 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.648138 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.648583 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.148564059 +0000 UTC m=+143.732755123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.748769 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.749128 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.249111052 +0000 UTC m=+143.833302056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: W0127 14:07:36.779532 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16dd44fa_b221_497c_a9fa_7dcf08359ab1.slice/crio-7b1c1cc4222bae6249b12806564d20257db53fc8d6d1f728b4ee136a21ce5a93 WatchSource:0}: Error finding container 7b1c1cc4222bae6249b12806564d20257db53fc8d6d1f728b4ee136a21ce5a93: Status 404 returned error can't find the container with id 7b1c1cc4222bae6249b12806564d20257db53fc8d6d1f728b4ee136a21ce5a93 Jan 27 14:07:36 crc kubenswrapper[4729]: W0127 14:07:36.801139 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a41dccf_e8e0_4bfb_aec4_eb7915e3efdd.slice/crio-2ed06b1a4a60a380cdd44836188791aca1fddfda64b82b277d231659979c1ba9 WatchSource:0}: Error finding container 2ed06b1a4a60a380cdd44836188791aca1fddfda64b82b277d231659979c1ba9: Status 404 returned error can't find the container with id 2ed06b1a4a60a380cdd44836188791aca1fddfda64b82b277d231659979c1ba9 Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.860014 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:36 crc 
kubenswrapper[4729]: E0127 14:07:36.860377 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.360363537 +0000 UTC m=+143.944554541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: W0127 14:07:36.867234 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccdf606d_6e18_4dad_b98d_5c4086aa953d.slice/crio-6f4294cf7a5d1d8fed1ac3da203cca62dce56ca4080d9aaeeab4a44af21a44eb WatchSource:0}: Error finding container 6f4294cf7a5d1d8fed1ac3da203cca62dce56ca4080d9aaeeab4a44af21a44eb: Status 404 returned error can't find the container with id 6f4294cf7a5d1d8fed1ac3da203cca62dce56ca4080d9aaeeab4a44af21a44eb Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.909965 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.914643 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" event={"ID":"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f","Type":"ContainerStarted","Data":"7fa1f940eec5f7abcc9c058f6073ec1e3ecb9d2e2eb1ce5defff051e185f497d"} Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.917810 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-fdbtl" event={"ID":"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd","Type":"ContainerStarted","Data":"2ed06b1a4a60a380cdd44836188791aca1fddfda64b82b277d231659979c1ba9"} Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.929306 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.931214 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" event={"ID":"f060881e-4060-4501-bcdc-b8f470d8f53e","Type":"ContainerStarted","Data":"e397182a2206c884990c992056a22f786495ead47edfdbd5c6c332b9281da260"} Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.934275 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" event={"ID":"a385db05-78b0-4bb7-80b9-e0089b92e40c","Type":"ContainerStarted","Data":"0af8ce5f334aa59b664d81f707aa9a81764fb53e0e035f3a9ef932b9eb93c4bf"} Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.935070 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" event={"ID":"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d","Type":"ContainerStarted","Data":"d28b916a74d6cf4eff5a48eb87db1f4a09cc4f890eb92bbccb35f155d6b4ee47"} Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.948445 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-78zrr"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.951101 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs"] Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.961967 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:36 crc kubenswrapper[4729]: E0127 14:07:36.962383 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.462366556 +0000 UTC m=+144.046557560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.979041 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" event={"ID":"2231abf6-4f19-4ee2-9c0b-c0125b08cc77","Type":"ContainerStarted","Data":"5bd45f009ba86defc8e35ccda818c2abd6655d2dbb72a0eabca76dc2d9f9bf9f"} Jan 27 14:07:36 crc kubenswrapper[4729]: I0127 14:07:36.986610 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" event={"ID":"16dd44fa-b221-497c-a9fa-7dcf08359ab1","Type":"ContainerStarted","Data":"7b1c1cc4222bae6249b12806564d20257db53fc8d6d1f728b4ee136a21ce5a93"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.001785 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" 
event={"ID":"6364490e-1548-4bfb-8a2a-0fb5d2dddddb","Type":"ContainerStarted","Data":"da8863959ebbee71f1d3230a6bd671b0062e0c78030f7d5554c79f46bee7e90e"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.022116 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" event={"ID":"cb22ecac-005d-414f-928c-5714be9f7596","Type":"ContainerStarted","Data":"896773e02dc9b8e4d58c2490a987da1ae4744c09f49ad3b63350ef537721fe98"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.025787 4729 generic.go:334] "Generic (PLEG): container finished" podID="07eaa085-f9f9-4d4a-9763-c3a216278f2b" containerID="81c819b449ff0e41daab1eb5063bcea6c9d2348b6dc8e1f375bdc875e5927640" exitCode=0 Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.025848 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" event={"ID":"07eaa085-f9f9-4d4a-9763-c3a216278f2b","Type":"ContainerDied","Data":"81c819b449ff0e41daab1eb5063bcea6c9d2348b6dc8e1f375bdc875e5927640"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.037234 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dm9ps" podStartSLOduration=123.037181416 podStartE2EDuration="2m3.037181416s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:37.036180225 +0000 UTC m=+143.620371229" watchObservedRunningTime="2026-01-27 14:07:37.037181416 +0000 UTC m=+143.621372420" Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.054421 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" 
event={"ID":"a48f8395-2811-46ab-82ea-0c5afc8c8eee","Type":"ContainerStarted","Data":"7c4b81a95c8f7063b3b3a4a423251bf9f192b9a8ad434e414dc2190c02a8c869"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.056535 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-f58jq" event={"ID":"e18f11c9-d605-41a8-9443-214c8d6a5c85","Type":"ContainerStarted","Data":"f6adad19c0ef00b077d0eb79b5373e66025d848d7704559954bc2fe1069b4112"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.059052 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" event={"ID":"f92069a8-3117-4693-ace6-f37ac416bf76","Type":"ContainerStarted","Data":"f6226a61301b2d9999cb969ab6d8d24182f7b22ea4aac361faecf5ed09257e69"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.063334 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.064237 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.564223391 +0000 UTC m=+144.148414395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.064273 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.064311 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.065398 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" event={"ID":"4c94d47c-c97c-493b-bdd6-cba399e6d9bc","Type":"ContainerStarted","Data":"a87e9906da4ca9488f7a0370a0f16299db62ee4d0f575c3d1f3812d4edb45728"} Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.153505 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" podStartSLOduration=123.153482104 podStartE2EDuration="2m3.153482104s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:37.086033808 +0000 UTC 
m=+143.670224822" watchObservedRunningTime="2026-01-27 14:07:37.153482104 +0000 UTC m=+143.737673108" Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.167517 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.171607 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.671580971 +0000 UTC m=+144.255771995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.225269 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.268756 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 
27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.269123 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.769111498 +0000 UTC m=+144.353302492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.372672 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.373180 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.873118201 +0000 UTC m=+144.457309215 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.373534 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.374050 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.874032581 +0000 UTC m=+144.458223585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.475600 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.476018 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:37.975993829 +0000 UTC m=+144.560184873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.551495 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-t2v8z" podStartSLOduration=122.551466491 podStartE2EDuration="2m2.551466491s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:37.506718471 +0000 UTC m=+144.090909495" watchObservedRunningTime="2026-01-27 14:07:37.551466491 +0000 UTC m=+144.135657495" Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.577421 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.577687 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.077675287 +0000 UTC m=+144.661866291 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.583386 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.600485 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.619928 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.619987 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-knf65"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.655635 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-br7d9"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.657157 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.681377 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.682048 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.182008232 +0000 UTC m=+144.766199236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: W0127 14:07:37.727932 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a971b1_5f64_4604_85b4_79d8fb8ba040.slice/crio-40076a9a57553ca227e67fa11811323d3f6253f5a7130198fe6fc0db19161eff WatchSource:0}: Error finding container 40076a9a57553ca227e67fa11811323d3f6253f5a7130198fe6fc0db19161eff: Status 404 returned error can't find the container with id 40076a9a57553ca227e67fa11811323d3f6253f5a7130198fe6fc0db19161eff Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.783366 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6rfvl" podStartSLOduration=123.78334463 podStartE2EDuration="2m3.78334463s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:37.759782247 +0000 UTC m=+144.343973251" watchObservedRunningTime="2026-01-27 14:07:37.78334463 +0000 UTC m=+144.367535624" Jan 27 14:07:37 crc 
kubenswrapper[4729]: I0127 14:07:37.784361 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.784774 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.284760175 +0000 UTC m=+144.868951179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.786818 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.789075 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj"] Jan 27 14:07:37 crc kubenswrapper[4729]: W0127 14:07:37.805346 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ec7e01_d640_484f_8b1e_37accfa6c3d2.slice/crio-07923333a5d75563141e42cb13979cfd0ac2111488b5e8fec46b34df0335252e WatchSource:0}: Error finding container 
07923333a5d75563141e42cb13979cfd0ac2111488b5e8fec46b34df0335252e: Status 404 returned error can't find the container with id 07923333a5d75563141e42cb13979cfd0ac2111488b5e8fec46b34df0335252e Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.850939 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cg69z"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.851112 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qddhn" podStartSLOduration=124.851091885 podStartE2EDuration="2m4.851091885s" podCreationTimestamp="2026-01-27 14:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:37.84312221 +0000 UTC m=+144.427313214" watchObservedRunningTime="2026-01-27 14:07:37.851091885 +0000 UTC m=+144.435282899" Jan 27 14:07:37 crc kubenswrapper[4729]: W0127 14:07:37.865319 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod892e841d_08ce_4e49_9238_70bd0f1268f8.slice/crio-62cb16f0018a5f840f651dda08daae4e2e486744108d1c8c6d3d6dadcbb05bb7 WatchSource:0}: Error finding container 62cb16f0018a5f840f651dda08daae4e2e486744108d1c8c6d3d6dadcbb05bb7: Status 404 returned error can't find the container with id 62cb16f0018a5f840f651dda08daae4e2e486744108d1c8c6d3d6dadcbb05bb7 Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.885492 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:37 crc kubenswrapper[4729]: W0127 14:07:37.885968 4729 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670cb5da_a4e2_4685_8dff_e8963eccaab3.slice/crio-ce030fc1ec3b6ca44faae3af561337d9bdd1b479801dbffc910259953906997b WatchSource:0}: Error finding container ce030fc1ec3b6ca44faae3af561337d9bdd1b479801dbffc910259953906997b: Status 404 returned error can't find the container with id ce030fc1ec3b6ca44faae3af561337d9bdd1b479801dbffc910259953906997b Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.886167 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.386144335 +0000 UTC m=+144.970335379 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.977093 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.990607 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv"] Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.990697 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:37 crc kubenswrapper[4729]: E0127 14:07:37.991019 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.491003466 +0000 UTC m=+145.075194470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:37 crc kubenswrapper[4729]: I0127 14:07:37.991328 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-698tz"] Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.019844 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8sd69"] Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.045804 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5q2rv"] Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.084693 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" event={"ID":"84a971b1-5f64-4604-85b4-79d8fb8ba040","Type":"ContainerStarted","Data":"40076a9a57553ca227e67fa11811323d3f6253f5a7130198fe6fc0db19161eff"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.088220 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" event={"ID":"6d714a1d-71b8-4daa-a1ce-ce9c0fb7b92d","Type":"ContainerStarted","Data":"3a42277971589f86ab670534d989bf8449a964b8800e75a72387e80e6b08e285"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.092315 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.092695 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.592681364 +0000 UTC m=+145.176872368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.115354 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cg69z" event={"ID":"8e60df4d-540b-489f-a297-46f35014add0","Type":"ContainerStarted","Data":"a1d8ad70d27ebcd86cf61da706c5300662e4e86d872708b4a2dfa885a5f5554f"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.128119 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" event={"ID":"650212ba-40b6-4d9e-803d-3556b77bd87e","Type":"ContainerStarted","Data":"a01ed08fe130d929927cb270c1fde162b05b798a19b90c9388aa74db7220f496"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.135164 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" event={"ID":"e59601bb-7561-4555-99e9-0e6faf392716","Type":"ContainerStarted","Data":"89c93e2b11c63daa6fa962d406b41b63d8e8ab71ce311271f7beb00e20bf0e1f"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.136954 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" event={"ID":"1b146e05-ee69-4211-b20d-5c5342a66a98","Type":"ContainerStarted","Data":"f03ed0232d956d3c9ff6890e83f27ec3cb7555de9613057803c69205d636cac8"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.143550 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-knf65" 
event={"ID":"ca2d17dd-20d7-40ed-a0d8-36fd0e037842","Type":"ContainerStarted","Data":"7419330e82898be75559ce1821cf20a3e4411c72e37d44e67827033accdb7446"} Jan 27 14:07:38 crc kubenswrapper[4729]: W0127 14:07:38.168640 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7186209_1e51_47db_8ced_ec6f4717b60f.slice/crio-53c684ae8f8f75368a597bf0b40bf00d3c5f489c390c44f2266191004e649c08 WatchSource:0}: Error finding container 53c684ae8f8f75368a597bf0b40bf00d3c5f489c390c44f2266191004e649c08: Status 404 returned error can't find the container with id 53c684ae8f8f75368a597bf0b40bf00d3c5f489c390c44f2266191004e649c08 Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.183947 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" event={"ID":"2231abf6-4f19-4ee2-9c0b-c0125b08cc77","Type":"ContainerStarted","Data":"cf1fee5076b4702bae97d5992000fc7faee11caa529b2b47544f3db552ce459c"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.184581 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wltzd" podStartSLOduration=124.184566851 podStartE2EDuration="2m4.184566851s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.183376692 +0000 UTC m=+144.767567686" watchObservedRunningTime="2026-01-27 14:07:38.184566851 +0000 UTC m=+144.768757855" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.194252 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: 
\"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.195635 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.695618964 +0000 UTC m=+145.279809968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.206539 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz"] Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.214889 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rbthg" podStartSLOduration=123.214856719 podStartE2EDuration="2m3.214856719s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.212118691 +0000 UTC m=+144.796309715" watchObservedRunningTime="2026-01-27 14:07:38.214856719 +0000 UTC m=+144.799047723" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.221060 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" 
event={"ID":"662ca662-4c9a-4546-975f-5ac76a11927f","Type":"ContainerStarted","Data":"d28e001b5c9d9e5a60e5df92045787abed8dc999cae477ab229971861eed9906"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.230558 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" event={"ID":"d6ec7e01-d640-484f-8b1e-37accfa6c3d2","Type":"ContainerStarted","Data":"07923333a5d75563141e42cb13979cfd0ac2111488b5e8fec46b34df0335252e"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.236453 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" event={"ID":"07eaa085-f9f9-4d4a-9763-c3a216278f2b","Type":"ContainerStarted","Data":"290d7a2cf96b0f7dc683e9944d65b305bcc8b414228c417c0822e08766bd9a17"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.236743 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.239217 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" event={"ID":"55aa3ae7-48f5-4b9a-9820-9c3c7aece43f","Type":"ContainerStarted","Data":"4158aff7702753eb6bccc39e2cc3d8477556fe0c89bf7cb4f4c8f1976d4c24fe"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.240122 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.249891 4729 patch_prober.go:28] interesting pod/console-operator-58897d9998-kdqlt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.249943 4729 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" podUID="55aa3ae7-48f5-4b9a-9820-9c3c7aece43f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.253811 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" event={"ID":"849503fd-ce3e-42e9-bae7-596c510d2b8b","Type":"ContainerStarted","Data":"f4705d64660beb7b8594d64f2c6997d5fbeb0fb3249b05ac936a6bf9b8bd1b70"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.271204 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" event={"ID":"f060881e-4060-4501-bcdc-b8f470d8f53e","Type":"ContainerStarted","Data":"b3cdbb96e5ffa8297b42e69b60e0391684de6bdd6e39ab75ec3d66fce56b487a"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.299146 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.300347 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:38.800308569 +0000 UTC m=+145.384499643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.313951 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" podStartSLOduration=124.313934444 podStartE2EDuration="2m4.313934444s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.27969359 +0000 UTC m=+144.863884584" watchObservedRunningTime="2026-01-27 14:07:38.313934444 +0000 UTC m=+144.898125458" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.316375 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" event={"ID":"f92069a8-3117-4693-ace6-f37ac416bf76","Type":"ContainerStarted","Data":"01d70e56b9c91af1c326766c63d0e8d74426c72cb99a2ef08ecc632b60431283"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.345045 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" podStartSLOduration=124.345029378 podStartE2EDuration="2m4.345029378s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.336061831 +0000 UTC m=+144.920252835" watchObservedRunningTime="2026-01-27 14:07:38.345029378 +0000 UTC m=+144.929220382" Jan 
27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.348192 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-92w7g"] Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.369238 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" event={"ID":"892e841d-08ce-4e49-9238-70bd0f1268f8","Type":"ContainerStarted","Data":"62cb16f0018a5f840f651dda08daae4e2e486744108d1c8c6d3d6dadcbb05bb7"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.387296 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" event={"ID":"c4dfbdbb-85c3-4c93-a321-d67e5438b772","Type":"ContainerStarted","Data":"05b391784d4d5db7a22013e3f85080c9be37a049796b7a50e409a4ce78f98449"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.398223 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" event={"ID":"ccdf606d-6e18-4dad-b98d-5c4086aa953d","Type":"ContainerStarted","Data":"6f4294cf7a5d1d8fed1ac3da203cca62dce56ca4080d9aaeeab4a44af21a44eb"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.400396 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.400658 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:38.900647575 +0000 UTC m=+145.484838579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.408565 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" event={"ID":"6ce0b622-7220-4c64-ba53-83fe3255d20c","Type":"ContainerStarted","Data":"3209a85946ff7466ee70f4c22eb3f8f026241a76d3ffadf07412a68820f1cd96"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.428613 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-f58jq" event={"ID":"e18f11c9-d605-41a8-9443-214c8d6a5c85","Type":"ContainerStarted","Data":"8ea2f7120466ceeaa05e48defdf505d23a23fc8087ddbd9ae05216ce578fef8d"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.454073 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-f58jq" podStartSLOduration=123.454053491 podStartE2EDuration="2m3.454053491s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.453441932 +0000 UTC m=+145.037632946" watchObservedRunningTime="2026-01-27 14:07:38.454053491 +0000 UTC m=+145.038244495" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.473828 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" event={"ID":"a385db05-78b0-4bb7-80b9-e0089b92e40c","Type":"ContainerStarted","Data":"b0d0a843915a529a44c244061109317f3c581b3026e6fcc73701d2efd9dcd059"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.474506 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.478503 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" event={"ID":"a48f8395-2811-46ab-82ea-0c5afc8c8eee","Type":"ContainerStarted","Data":"c40e6862c10de227a256a984ea4a1828003586ed588693675c0ed040571447a6"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.502365 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.503735 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.003714469 +0000 UTC m=+145.587905473 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.509763 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" event={"ID":"cbe31444-42f1-415e-86f8-ce9b8258fe65","Type":"ContainerStarted","Data":"44d0379349ba8a24c2c45f3bc751adfb45307053857fae7b3c8fa919b3ba801a"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.519725 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" podStartSLOduration=123.519698749 podStartE2EDuration="2m3.519698749s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:38.515385101 +0000 UTC m=+145.099576115" watchObservedRunningTime="2026-01-27 14:07:38.519698749 +0000 UTC m=+145.103889743" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.533321 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" event={"ID":"830dd0be-675e-405e-b29c-b2916985aac4","Type":"ContainerStarted","Data":"2c0066c49724123b494c6d469c3f73305ab1cfbfc04d216f4f67f6114688ef53"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.537019 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" 
event={"ID":"4c94d47c-c97c-493b-bdd6-cba399e6d9bc","Type":"ContainerStarted","Data":"a4a6e4cba9739135143484787745c96f078ed28cc7bacc1b6090971443406d13"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.551800 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" event={"ID":"670cb5da-a4e2-4685-8dff-e8963eccaab3","Type":"ContainerStarted","Data":"ce030fc1ec3b6ca44faae3af561337d9bdd1b479801dbffc910259953906997b"} Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.554183 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.554256 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.604785 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.606801 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.106772261 +0000 UTC m=+145.690963375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.642052 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.665482 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.677850 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:38 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:38 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:38 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.678219 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.695292 4729 csr.go:261] certificate signing request csr-5vbkk is approved, waiting to be issued Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.708368 4729 csr.go:257] certificate signing request csr-5vbkk is issued Jan 27 14:07:38 
crc kubenswrapper[4729]: I0127 14:07:38.710152 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.710417 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.210403333 +0000 UTC m=+145.794594337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.811996 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.812353 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:39.31234029 +0000 UTC m=+145.896531294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.914397 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.914526 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.414500435 +0000 UTC m=+145.998691439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:38 crc kubenswrapper[4729]: I0127 14:07:38.915062 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:38 crc kubenswrapper[4729]: E0127 14:07:38.915371 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.415363903 +0000 UTC m=+145.999554897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.029596 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.030783 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.53076701 +0000 UTC m=+146.114958014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.132486 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.132934 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.632919314 +0000 UTC m=+146.217110318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.235382 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.235986 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.735964317 +0000 UTC m=+146.320155331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.236074 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.236488 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.736479023 +0000 UTC m=+146.320670027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.337449 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.337588 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.837563363 +0000 UTC m=+146.421754377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.337958 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.338464 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.838441372 +0000 UTC m=+146.422632376 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.440010 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.440334 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.940300277 +0000 UTC m=+146.524491281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.440553 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.440940 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:39.940925666 +0000 UTC m=+146.525116670 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.542297 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.543123 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.043104661 +0000 UTC m=+146.627295665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.592966 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" event={"ID":"a48f8395-2811-46ab-82ea-0c5afc8c8eee","Type":"ContainerStarted","Data":"4b391612cd6b81db5b90c121cba01f2701a8da055d532258e94eaadcfa0714c0"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.593292 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.606441 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cg69z" event={"ID":"8e60df4d-540b-489f-a297-46f35014add0","Type":"ContainerStarted","Data":"0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.620212 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" podStartSLOduration=124.620197145 podStartE2EDuration="2m4.620197145s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.617862231 +0000 UTC m=+146.202053235" watchObservedRunningTime="2026-01-27 14:07:39.620197145 +0000 UTC m=+146.204388149" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 
14:07:39.624941 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" event={"ID":"b04b84a3-e0a2-4271-b805-dbc5b3ff67be","Type":"ContainerStarted","Data":"6d62c869ab2a4852465d2c6a7aced7deaf7798a08553243ac6b6c3a4fef96de3"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.624994 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" event={"ID":"b04b84a3-e0a2-4271-b805-dbc5b3ff67be","Type":"ContainerStarted","Data":"6c0593a7779b5222373a02456677a17252a351328ffe0372d5e78f35d2bf491a"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.625008 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" event={"ID":"b04b84a3-e0a2-4271-b805-dbc5b3ff67be","Type":"ContainerStarted","Data":"02011e68be58b428ca04a70957cf7b843cf0e6e7c2461be3fe4ca4bc42a8de85"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.633563 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" event={"ID":"cb22ecac-005d-414f-928c-5714be9f7596","Type":"ContainerStarted","Data":"df4a4c19caf4dddd9cb6db2689c4d167040642247d15e3eb1109b7a0c3bb2c43"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.634387 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.638990 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" event={"ID":"830dd0be-675e-405e-b29c-b2916985aac4","Type":"ContainerStarted","Data":"4907351dbe0a59c59ee3ca5724f670ff4d9245032e39d4708c0833540c7b51da"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.639049 4729 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" event={"ID":"830dd0be-675e-405e-b29c-b2916985aac4","Type":"ContainerStarted","Data":"fa493736ff37ee9e40e022869ec6f5475522fee58eecf27e4af20867e75b512c"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.641857 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cg69z" podStartSLOduration=125.641842997 podStartE2EDuration="2m5.641842997s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.641454594 +0000 UTC m=+146.225645598" watchObservedRunningTime="2026-01-27 14:07:39.641842997 +0000 UTC m=+146.226034001" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.644699 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.646844 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.146829106 +0000 UTC m=+146.731020210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.655707 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:39 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:39 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:39 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.656024 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.666986 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" event={"ID":"d6ec7e01-d640-484f-8b1e-37accfa6c3d2","Type":"ContainerStarted","Data":"5f50899f0b84af705e39df1fa1daf3cb8a12f305517d20b691623ed4e155b32e"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.673028 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" event={"ID":"849503fd-ce3e-42e9-bae7-596c510d2b8b","Type":"ContainerStarted","Data":"e7982a95b0dcfa1e7cee4b8c564d721ccf3bb79bc5587d89450175e1daaaaf72"} Jan 27 14:07:39 crc kubenswrapper[4729]: 
I0127 14:07:39.673933 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.682547 4729 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-krhqs container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.683390 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" podUID="849503fd-ce3e-42e9-bae7-596c510d2b8b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.683810 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fdbtl" event={"ID":"5a41dccf-e8e0-4bfb-aec4-eb7915e3efdd","Type":"ContainerStarted","Data":"f6262f043b9c0af8fe9e9c3cc184b1ca7a138699c0f46deda75cad6f29359125"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.701588 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5g6nd" podStartSLOduration=124.701572935 podStartE2EDuration="2m4.701572935s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.697460263 +0000 UTC m=+146.281651267" watchObservedRunningTime="2026-01-27 14:07:39.701572935 +0000 UTC m=+146.285763949" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.701795 4729 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9rzkd" podStartSLOduration=125.701790743 podStartE2EDuration="2m5.701790743s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.672453595 +0000 UTC m=+146.256644599" watchObservedRunningTime="2026-01-27 14:07:39.701790743 +0000 UTC m=+146.285981747" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.706815 4729 generic.go:334] "Generic (PLEG): container finished" podID="1b954a21-e466-4ad6-aac5-f3bcea883d63" containerID="f7b34b5a63425f035046f3bc183d1116bc25319a660f877198b1b2d58787a853" exitCode=0 Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.706943 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" event={"ID":"1b954a21-e466-4ad6-aac5-f3bcea883d63","Type":"ContainerDied","Data":"f7b34b5a63425f035046f3bc183d1116bc25319a660f877198b1b2d58787a853"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.709513 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 14:02:38 +0000 UTC, rotation deadline is 2026-11-04 14:32:11.891331213 +0000 UTC Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.709549 4729 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6744h24m32.181785383s for next certificate rotation Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.721727 4729 generic.go:334] "Generic (PLEG): container finished" podID="f060881e-4060-4501-bcdc-b8f470d8f53e" containerID="b3cdbb96e5ffa8297b42e69b60e0391684de6bdd6e39ab75ec3d66fce56b487a" exitCode=0 Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.721806 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" 
event={"ID":"f060881e-4060-4501-bcdc-b8f470d8f53e","Type":"ContainerDied","Data":"b3cdbb96e5ffa8297b42e69b60e0391684de6bdd6e39ab75ec3d66fce56b487a"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.735668 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" event={"ID":"f92069a8-3117-4693-ace6-f37ac416bf76","Type":"ContainerStarted","Data":"90679deb64e2bd8d7012cf882358aac945bae805488476d3a6b2936fa58f6e7a"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.740827 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8sd69" event={"ID":"913e35a2-8a8a-46d5-8e6e-cac158b42871","Type":"ContainerStarted","Data":"09f856d8690c92a33ffb1767ce19d88ffc5887b4abd250f52f862fb73732f08f"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.740865 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8sd69" event={"ID":"913e35a2-8a8a-46d5-8e6e-cac158b42871","Type":"ContainerStarted","Data":"e6d4c3b260e28e642413c4943c78fd7b0dab180fed2a41e5d338cc2c102d765f"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.755903 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" event={"ID":"4c94d47c-c97c-493b-bdd6-cba399e6d9bc","Type":"ContainerStarted","Data":"3db1a6b48fa62ccf8b9d9b9158e96bef629088d6582b7c9675407b7e151dfed8"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.759648 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" event={"ID":"662ca662-4c9a-4546-975f-5ac76a11927f","Type":"ContainerStarted","Data":"a045827f30de192ebd62c21ebea00b71c04a683ac80cc7f976ddea2291f242bc"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.763491 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" podStartSLOduration=125.763472654 podStartE2EDuration="2m5.763472654s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.762650307 +0000 UTC m=+146.346841321" watchObservedRunningTime="2026-01-27 14:07:39.763472654 +0000 UTC m=+146.347663668" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.775747 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.783435 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.28340461 +0000 UTC m=+146.867595614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.861028 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" event={"ID":"16dd44fa-b221-497c-a9fa-7dcf08359ab1","Type":"ContainerStarted","Data":"daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.861863 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.870051 4729 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d8548 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.870107 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.877682 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.880090 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.380076509 +0000 UTC m=+146.964267583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.906120 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" event={"ID":"e755ab51-7011-46bf-a62c-00aeef6ac4fc","Type":"ContainerStarted","Data":"0df7dcea8d972863c8279e6879a2f1da3ca80889e0154252d0ecfddf1d7e2872"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.906164 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" event={"ID":"e755ab51-7011-46bf-a62c-00aeef6ac4fc","Type":"ContainerStarted","Data":"b60152bc081561ae33b8ba15ccb8e1c7c5220aba3e0d3c0c903fa3141adc48df"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.912564 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" podStartSLOduration=124.912547907 
podStartE2EDuration="2m4.912547907s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.880049868 +0000 UTC m=+146.464240872" watchObservedRunningTime="2026-01-27 14:07:39.912547907 +0000 UTC m=+146.496738911" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.914494 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" podStartSLOduration=125.914487988 podStartE2EDuration="2m5.914487988s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.912131854 +0000 UTC m=+146.496322858" watchObservedRunningTime="2026-01-27 14:07:39.914487988 +0000 UTC m=+146.498678992" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.917791 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" event={"ID":"892e841d-08ce-4e49-9238-70bd0f1268f8","Type":"ContainerStarted","Data":"9009bdf32b9d9cd8711871b033e4233975deadde37f2d97d53ad03b08ebe22df"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.944538 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" event={"ID":"e59601bb-7561-4555-99e9-0e6faf392716","Type":"ContainerStarted","Data":"f8d1d0b8899d63924602beb57acdf6f382a18ed9a1aac9e316e063e1a73b56b1"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.960595 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9vsf4" podStartSLOduration=124.960576341 podStartE2EDuration="2m4.960576341s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.939288921 +0000 UTC m=+146.523479925" watchObservedRunningTime="2026-01-27 14:07:39.960576341 +0000 UTC m=+146.544767345" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.961343 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" event={"ID":"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c","Type":"ContainerStarted","Data":"d879260ab51c896a96a8eab88d44e4f3da1ee03009db082b2045d07ff22db01a"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.961377 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" event={"ID":"5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c","Type":"ContainerStarted","Data":"69314dedf4719c76799e5f125e078a09a8f8c785deaddcb359147277b8d70cf5"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.962381 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.972249 4729 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-9kmmz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.972301 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" podUID="5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.979184 4729 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:39 crc kubenswrapper[4729]: E0127 14:07:39.980697 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.480681814 +0000 UTC m=+147.064872818 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.989049 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" event={"ID":"cbe31444-42f1-415e-86f8-ce9b8258fe65","Type":"ContainerStarted","Data":"f810ffd318a09a583c2f542da8c75d5214143e0481aaf6fb0d52b3f0660a0857"} Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.990012 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-565qs" podStartSLOduration=125.989990371 podStartE2EDuration="2m5.989990371s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 14:07:39.961045287 +0000 UTC m=+146.545236321" watchObservedRunningTime="2026-01-27 14:07:39.989990371 +0000 UTC m=+146.574181375" Jan 27 14:07:39 crc kubenswrapper[4729]: I0127 14:07:39.996125 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" event={"ID":"670cb5da-a4e2-4685-8dff-e8963eccaab3","Type":"ContainerStarted","Data":"1a746a14e0cfa83d9265795d7aba73cb42e2b5046c23e6a09d10d57ef18bdd9b"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.016031 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8sd69" podStartSLOduration=8.016012973 podStartE2EDuration="8.016012973s" podCreationTimestamp="2026-01-27 14:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:39.990157836 +0000 UTC m=+146.574348840" watchObservedRunningTime="2026-01-27 14:07:40.016012973 +0000 UTC m=+146.600204007" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.016656 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8w2tz" podStartSLOduration=125.016650553 podStartE2EDuration="2m5.016650553s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.014132343 +0000 UTC m=+146.598323357" watchObservedRunningTime="2026-01-27 14:07:40.016650553 +0000 UTC m=+146.600841557" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.022910 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" 
event={"ID":"6ce0b622-7220-4c64-ba53-83fe3255d20c","Type":"ContainerStarted","Data":"96d4adbe13d119520e0c98e7ecf7d212fdcf09a4ead40d45f24582070604c862"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.050576 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" event={"ID":"84a971b1-5f64-4604-85b4-79d8fb8ba040","Type":"ContainerStarted","Data":"2899b53fba4995af43e7cb0f893712008ae2e72a7dff7d26e16233b5907aa2a8"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.072838 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.076929 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" event={"ID":"c4dfbdbb-85c3-4c93-a321-d67e5438b772","Type":"ContainerStarted","Data":"4c04c474852d2b0bd2ee23ab153c87ca8e756bfc75f442fe6651c20d3a6c1a5c"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.083683 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.085308 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.585290876 +0000 UTC m=+147.169481930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.093699 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.098571 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fdbtl" podStartSLOduration=8.09855703 podStartE2EDuration="8.09855703s" podCreationTimestamp="2026-01-27 14:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.051786816 +0000 UTC m=+146.635977830" watchObservedRunningTime="2026-01-27 14:07:40.09855703 +0000 UTC m=+146.682748034" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.099025 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" event={"ID":"ccdf606d-6e18-4dad-b98d-5c4086aa953d","Type":"ContainerStarted","Data":"4e9d928a47aa5c4a7d677c4162bbb3585710fc094c34d1f12f19da9f646a9606"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.099048 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" event={"ID":"ccdf606d-6e18-4dad-b98d-5c4086aa953d","Type":"ContainerStarted","Data":"6bc698591ff52506c2ba8b1b3bd441a0fbdc1ebee6469fbc8495b19437019118"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 
14:07:40.121132 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" event={"ID":"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511","Type":"ContainerStarted","Data":"ce8ed6845e1648bbe2602bc2664a90e98c52ada5934b220fc9d46bf23a304d97"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.131228 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" event={"ID":"650212ba-40b6-4d9e-803d-3556b77bd87e","Type":"ContainerStarted","Data":"7b2915957210ac933d2a97348cb826ebf782842ef10aba49367c0c55c6f76df7"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.132223 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rjlbl" podStartSLOduration=125.132205786 podStartE2EDuration="2m5.132205786s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.099630735 +0000 UTC m=+146.683821739" watchObservedRunningTime="2026-01-27 14:07:40.132205786 +0000 UTC m=+146.716396780" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.132780 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lmb8r" podStartSLOduration=125.132776283 podStartE2EDuration="2m5.132776283s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.131228014 +0000 UTC m=+146.715419018" watchObservedRunningTime="2026-01-27 14:07:40.132776283 +0000 UTC m=+146.716967287" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.134814 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-698tz" event={"ID":"c7186209-1e51-47db-8ced-ec6f4717b60f","Type":"ContainerStarted","Data":"75d90e836bbd9d8437a7d4c6474b1ec7cef5b718680e8a1735e88e6193af4859"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.134855 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-698tz" event={"ID":"c7186209-1e51-47db-8ced-ec6f4717b60f","Type":"ContainerStarted","Data":"53c684ae8f8f75368a597bf0b40bf00d3c5f489c390c44f2266191004e649c08"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.166121 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-knf65" event={"ID":"ca2d17dd-20d7-40ed-a0d8-36fd0e037842","Type":"ContainerStarted","Data":"45d132584854c1bc538fd78f91d5d68077684f684dbb688d6d41e69bf0617f24"} Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.166170 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-knf65" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.186604 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.192241 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.692217913 +0000 UTC m=+147.276408917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.197078 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" podStartSLOduration=125.197054257 podStartE2EDuration="2m5.197054257s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.191360106 +0000 UTC m=+146.775551120" watchObservedRunningTime="2026-01-27 14:07:40.197054257 +0000 UTC m=+146.781245261" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.227923 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" podStartSLOduration=125.227904253 podStartE2EDuration="2m5.227904253s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.226304492 +0000 UTC m=+146.810495516" watchObservedRunningTime="2026-01-27 14:07:40.227904253 +0000 UTC m=+146.812095257" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.264757 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-92w7g" podStartSLOduration=125.264743171 podStartE2EDuration="2m5.264743171s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.263358897 +0000 UTC m=+146.847549911" watchObservedRunningTime="2026-01-27 14:07:40.264743171 +0000 UTC m=+146.848934185" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.290732 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.291071 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.791057191 +0000 UTC m=+147.375248195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.329913 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" podStartSLOduration=126.329869522 podStartE2EDuration="2m6.329869522s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.293708456 +0000 UTC m=+146.877899460" watchObservedRunningTime="2026-01-27 14:07:40.329869522 +0000 UTC m=+146.914060526" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.331248 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhcnj" podStartSLOduration=125.331240465 podStartE2EDuration="2m5.331240465s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.321854515 +0000 UTC m=+146.906045539" watchObservedRunningTime="2026-01-27 14:07:40.331240465 +0000 UTC m=+146.915431469" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.354326 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" podStartSLOduration=125.354300073 podStartE2EDuration="2m5.354300073s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.35359671 +0000 UTC m=+146.937787714" watchObservedRunningTime="2026-01-27 14:07:40.354300073 +0000 UTC m=+146.938491077" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.376577 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" podStartSLOduration=125.376559783 podStartE2EDuration="2m5.376559783s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.375126177 +0000 UTC m=+146.959317211" watchObservedRunningTime="2026-01-27 14:07:40.376559783 +0000 UTC m=+146.960750787" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.392256 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.392732 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:40.892701249 +0000 UTC m=+147.476892253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.405493 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wldws" podStartSLOduration=125.405478188 podStartE2EDuration="2m5.405478188s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.404028511 +0000 UTC m=+146.988219515" watchObservedRunningTime="2026-01-27 14:07:40.405478188 +0000 UTC m=+146.989669192" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.472781 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-698tz" podStartSLOduration=125.472762778 podStartE2EDuration="2m5.472762778s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.469321738 +0000 UTC m=+147.053512742" watchObservedRunningTime="2026-01-27 14:07:40.472762778 +0000 UTC m=+147.056953782" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.474217 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-94sjk" podStartSLOduration=126.474211094 podStartE2EDuration="2m6.474211094s" podCreationTimestamp="2026-01-27 14:05:34 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.43247457 +0000 UTC m=+147.016665574" watchObservedRunningTime="2026-01-27 14:07:40.474211094 +0000 UTC m=+147.058402098" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.477280 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-kdqlt" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.494351 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.511498 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.011481165 +0000 UTC m=+147.595672169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.539771 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-78zrr" podStartSLOduration=126.539756818 podStartE2EDuration="2m6.539756818s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.538131017 +0000 UTC m=+147.122322011" watchObservedRunningTime="2026-01-27 14:07:40.539756818 +0000 UTC m=+147.123947822" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.563109 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-knf65" podStartSLOduration=8.563093795 podStartE2EDuration="8.563093795s" podCreationTimestamp="2026-01-27 14:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.562306319 +0000 UTC m=+147.146497313" watchObservedRunningTime="2026-01-27 14:07:40.563093795 +0000 UTC m=+147.147284799" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.599409 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.599747 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.099731185 +0000 UTC m=+147.683922189 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.606705 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-hq5wj" podStartSLOduration=125.606686317 podStartE2EDuration="2m5.606686317s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:40.584755236 +0000 UTC m=+147.168946250" watchObservedRunningTime="2026-01-27 14:07:40.606686317 +0000 UTC m=+147.190877321" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.635400 4729 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-89l7c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.635469 4729 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" podUID="cb22ecac-005d-414f-928c-5714be9f7596" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.31:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.642813 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:40 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:40 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:40 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.642889 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.701382 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.701750 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.201735255 +0000 UTC m=+147.785926259 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.766939 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2xxhk" Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.802253 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.802618 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.302598388 +0000 UTC m=+147.886789392 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:40 crc kubenswrapper[4729]: I0127 14:07:40.903952 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:40 crc kubenswrapper[4729]: E0127 14:07:40.904330 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.404312747 +0000 UTC m=+147.988503751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.006199 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.006397 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.506360358 +0000 UTC m=+148.090551362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.006518 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.006975 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.506961218 +0000 UTC m=+148.091152222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.107514 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.107695 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.607665016 +0000 UTC m=+148.191856040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.107830 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.108185 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.608175222 +0000 UTC m=+148.192366286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.172325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-p8bjj" event={"ID":"670cb5da-a4e2-4685-8dff-e8963eccaab3","Type":"ContainerStarted","Data":"5e0b29cc7052b37b04d826706ea353cf5398583c3803798db44d29301af8feb3"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.174309 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" event={"ID":"f060881e-4060-4501-bcdc-b8f470d8f53e","Type":"ContainerStarted","Data":"f4c40fe983ce00fe540da12ceb5dcd38f63b3079cd4da1ec348579f3f0c6e781"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.176445 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ss7lk" event={"ID":"1b146e05-ee69-4211-b20d-5c5342a66a98","Type":"ContainerStarted","Data":"dc666f06caa0f969ff638928a5127aca812a844a29b137a363ca8d9040337896"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.178024 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-br7d9" event={"ID":"d6ec7e01-d640-484f-8b1e-37accfa6c3d2","Type":"ContainerStarted","Data":"85d3da61c6affbb86b2a6f3f0fe55c78963e8306fbb5afa6716ca68ca29eacae"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.179568 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-knf65" 
event={"ID":"ca2d17dd-20d7-40ed-a0d8-36fd0e037842","Type":"ContainerStarted","Data":"0252c2fbdbb8d51be6f3e15ba779e3ca6eb5e739962375028566f290b5437c72"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.180972 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" event={"ID":"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511","Type":"ContainerStarted","Data":"d4731f3d3d87abe4653825ed7f065a679667a9c211dc62600a12d50c72c5f56f"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.183481 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" event={"ID":"1b954a21-e466-4ad6-aac5-f3bcea883d63","Type":"ContainerStarted","Data":"c2a2ddb208feeffbaa0fa086e95f16ef1814e75162fa9e9863233a6ae4e4b16c"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.183559 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" event={"ID":"1b954a21-e466-4ad6-aac5-f3bcea883d63","Type":"ContainerStarted","Data":"a9d2f17e3903a172a768d72c89ba1d8314c591c5fbea068f5c4323742273372c"} Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.184075 4729 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d8548 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.184118 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.184644 4729 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-9kmmz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.184670 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" podUID="5ead3160-7f1b-4bc3-ba3c-3a5876b86e2c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.201712 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.208574 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.208960 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.708936262 +0000 UTC m=+148.293127266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.281825 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" podStartSLOduration=126.28180775 podStartE2EDuration="2m6.28180775s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:41.244237989 +0000 UTC m=+147.828428993" watchObservedRunningTime="2026-01-27 14:07:41.28180775 +0000 UTC m=+147.865998754" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.311893 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.319090 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.819074631 +0000 UTC m=+148.403265685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.327350 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" podStartSLOduration=127.327330255 podStartE2EDuration="2m7.327330255s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:41.326176238 +0000 UTC m=+147.910367252" watchObservedRunningTime="2026-01-27 14:07:41.327330255 +0000 UTC m=+147.911521279" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.419096 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.419574 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:41.919553991 +0000 UTC m=+148.503744995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.521190 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.521660 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.021637263 +0000 UTC m=+148.605828407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.622489 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.622936 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.12291846 +0000 UTC m=+148.707109464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.645074 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:41 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:41 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:41 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.645126 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.725164 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.725636 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:42.225621942 +0000 UTC m=+148.809812956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.826911 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.827047 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.327008131 +0000 UTC m=+148.911199135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.827250 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.827635 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.327625141 +0000 UTC m=+148.911816145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.857568 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" Jan 27 14:07:41 crc kubenswrapper[4729]: I0127 14:07:41.934562 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:41 crc kubenswrapper[4729]: E0127 14:07:41.935801 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.435774177 +0000 UTC m=+149.019965181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.037098 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.037200 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.037231 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.037276 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.037301 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.038223 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.53820842 +0000 UTC m=+149.122399424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.038820 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.043981 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.047538 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.048710 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.077405 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.083338 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.089169 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.139129 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.139525 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.639505397 +0000 UTC m=+149.223696401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.201220 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" event={"ID":"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511","Type":"ContainerStarted","Data":"2b2cc7c123a2477f12e1cc2bca5569446921a2b10c5a292a49713708010cabdd"} Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.219479 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.229460 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9kmmz" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.240545 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.240927 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.740914067 +0000 UTC m=+149.325105071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.348148 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.348414 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.848392112 +0000 UTC m=+149.432583106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.348707 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.350379 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.850370905 +0000 UTC m=+149.434561899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.457360 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.457510 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.957480298 +0000 UTC m=+149.541671302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.457614 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.458021 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:42.958010734 +0000 UTC m=+149.542201748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.558850 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.559755 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.059735255 +0000 UTC m=+149.643926259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.645449 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:42 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:42 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:42 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.645505 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.660922 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.662954 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:43.162932252 +0000 UTC m=+149.747123326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: W0127 14:07:42.686420 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-946c19540701c95536f4a26f880f0cdba17ab3cd40efbb63907d30224ac4d17b WatchSource:0}: Error finding container 946c19540701c95536f4a26f880f0cdba17ab3cd40efbb63907d30224ac4d17b: Status 404 returned error can't find the container with id 946c19540701c95536f4a26f880f0cdba17ab3cd40efbb63907d30224ac4d17b Jan 27 14:07:42 crc kubenswrapper[4729]: W0127 14:07:42.735369 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-11389407dabd474d2d7ade2618b48c13f3e48865c5482a8da06a9900ec112978 WatchSource:0}: Error finding container 11389407dabd474d2d7ade2618b48c13f3e48865c5482a8da06a9900ec112978: Status 404 returned error can't find the container with id 11389407dabd474d2d7ade2618b48c13f3e48865c5482a8da06a9900ec112978 Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.764592 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.765477 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.265449019 +0000 UTC m=+149.849640023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.866445 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.866807 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.366792327 +0000 UTC m=+149.950983331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.968788 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.969162 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.469140227 +0000 UTC m=+150.053331231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:42 crc kubenswrapper[4729]: I0127 14:07:42.969522 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:42 crc kubenswrapper[4729]: E0127 14:07:42.970126 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.470117189 +0000 UTC m=+150.054308193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.071154 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.071566 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.571544529 +0000 UTC m=+150.155735533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.173163 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.173555 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.673543739 +0000 UTC m=+150.257734743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.232674 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" event={"ID":"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511","Type":"ContainerStarted","Data":"19d3f1263ff5690aacef867c4b5a5358f22538619bc68fcdad0cdd139637050d"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.232988 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" event={"ID":"f3c004e3-2eca-4fed-ac6b-9e1eb31fb511","Type":"ContainerStarted","Data":"9b9c539d905367cf8d75c626da96d27730cfca49bf9e071ff1984913c0ec8a8c"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.243653 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a55c78cc47a758f6c2a7c3a05d66037c0e86dcc7ad518597d03f9253fdbc52ff"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.243941 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"11389407dabd474d2d7ade2618b48c13f3e48865c5482a8da06a9900ec112978"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.245109 4729 generic.go:334] "Generic (PLEG): container finished" podID="e59601bb-7561-4555-99e9-0e6faf392716" 
containerID="f8d1d0b8899d63924602beb57acdf6f382a18ed9a1aac9e316e063e1a73b56b1" exitCode=0 Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.245263 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" event={"ID":"e59601bb-7561-4555-99e9-0e6faf392716","Type":"ContainerDied","Data":"f8d1d0b8899d63924602beb57acdf6f382a18ed9a1aac9e316e063e1a73b56b1"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.246506 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"da2491cd788db7754fe8aec912c73c574d55145eb34dfdf0155b65fee78d6c8e"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.246607 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"99bd8fa2945ce9ecc7b38916f22ea1e3e6e2d21b81e4477603f91c64a3293eb9"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.259774 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8b1c7abb8af14124a1397efccac1c126633cd2d692b024aeacf822f4e2b70659"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.259816 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"946c19540701c95536f4a26f880f0cdba17ab3cd40efbb63907d30224ac4d17b"} Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.260371 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:07:43 
crc kubenswrapper[4729]: I0127 14:07:43.274822 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.275104 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.775086084 +0000 UTC m=+150.359277088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.306455 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5q2rv" podStartSLOduration=11.306437775 podStartE2EDuration="11.306437775s" podCreationTimestamp="2026-01-27 14:07:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:43.28090632 +0000 UTC m=+149.865097324" watchObservedRunningTime="2026-01-27 14:07:43.306437775 +0000 UTC m=+149.890628779" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.333780 4729 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" 
path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.377064 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.381765 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.881745342 +0000 UTC m=+150.465936436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.416707 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dpvk8"] Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.417713 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.425393 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.435180 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpvk8"] Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.478708 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.479038 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:43.97901926 +0000 UTC m=+150.563210264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.580826 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-utilities\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.580911 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-catalog-content\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.580944 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.581191 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lth7\" (UniqueName: 
\"kubernetes.io/projected/ade27118-861e-4da6-9a5e-600cfbef607f-kube-api-access-4lth7\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.581446 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.081433182 +0000 UTC m=+150.665624186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.603615 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tgstt"] Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.604683 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.606657 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.621421 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgstt"] Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.644063 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:43 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:43 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:43 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.644173 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.682732 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.682923 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lth7\" (UniqueName: \"kubernetes.io/projected/ade27118-861e-4da6-9a5e-600cfbef607f-kube-api-access-4lth7\") pod \"certified-operators-dpvk8\" (UID: 
\"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.682986 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-utilities\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.683022 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-catalog-content\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.683577 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-catalog-content\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.683822 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.183798044 +0000 UTC m=+150.767989048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.683904 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-utilities\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.706848 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lth7\" (UniqueName: \"kubernetes.io/projected/ade27118-861e-4da6-9a5e-600cfbef607f-kube-api-access-4lth7\") pod \"certified-operators-dpvk8\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.732431 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.783959 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.784013 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxw9\" (UniqueName: \"kubernetes.io/projected/1cbbed75-f666-4324-be28-902bb6564058-kube-api-access-6zxw9\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.784057 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-catalog-content\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.784111 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-utilities\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.784483 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.28446956 +0000 UTC m=+150.868660564 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.807813 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkb8m"] Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.808976 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.815969 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkb8m"] Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.892250 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.892563 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.392538224 +0000 UTC m=+150.976729228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.893494 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-utilities\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.893615 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.893653 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zxw9\" (UniqueName: \"kubernetes.io/projected/1cbbed75-f666-4324-be28-902bb6564058-kube-api-access-6zxw9\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.893702 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-catalog-content\") pod \"community-operators-tgstt\" (UID: 
\"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.894033 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-utilities\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.894092 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-catalog-content\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: E0127 14:07:43.894358 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.394346212 +0000 UTC m=+150.978537216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.910300 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxw9\" (UniqueName: \"kubernetes.io/projected/1cbbed75-f666-4324-be28-902bb6564058-kube-api-access-6zxw9\") pod \"community-operators-tgstt\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.919203 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:07:43 crc kubenswrapper[4729]: I0127 14:07:43.982842 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpvk8"] Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.001954 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.002262 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrsm2\" (UniqueName: \"kubernetes.io/projected/766577d6-c3bd-42aa-8fce-ed48e92c546c-kube-api-access-rrsm2\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " 
pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.002301 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-utilities\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.002367 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-catalog-content\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: E0127 14:07:44.002482 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.502463926 +0000 UTC m=+151.086654930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.013081 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wj9pn"] Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.018300 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.025731 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wj9pn"] Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.103892 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.103985 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrsm2\" (UniqueName: \"kubernetes.io/projected/766577d6-c3bd-42aa-8fce-ed48e92c546c-kube-api-access-rrsm2\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.104015 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-utilities\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.104080 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-catalog-content\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.104509 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-catalog-content\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: E0127 14:07:44.104815 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.604801576 +0000 UTC m=+151.188992580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.105417 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-utilities\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.146700 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrsm2\" (UniqueName: \"kubernetes.io/projected/766577d6-c3bd-42aa-8fce-ed48e92c546c-kube-api-access-rrsm2\") pod \"certified-operators-vkb8m\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.204699 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:44 crc kubenswrapper[4729]: E0127 14:07:44.204984 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 14:07:44.704957447 +0000 UTC m=+151.289148451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.205317 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-catalog-content\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.205395 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp7wd\" (UniqueName: \"kubernetes.io/projected/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-kube-api-access-jp7wd\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.205450 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.205494 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-utilities\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: E0127 14:07:44.205837 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 14:07:44.705820884 +0000 UTC m=+151.290011898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ck87f" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.238847 4729 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T14:07:43.333811351Z","Handler":null,"Name":""} Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.243631 4729 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.243686 4729 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.269926 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerStarted","Data":"513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d"} Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.269976 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerStarted","Data":"338a660165475b32a01778d7cecaf258d21f812712654caef2cbdfe012310593"} Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.273304 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.306387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.306656 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-utilities\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.306703 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-catalog-content\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.306754 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp7wd\" (UniqueName: \"kubernetes.io/projected/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-kube-api-access-jp7wd\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.307390 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-utilities\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.315203 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.316222 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-catalog-content\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.319731 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tgstt"] Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.326831 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp7wd\" (UniqueName: \"kubernetes.io/projected/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-kube-api-access-jp7wd\") pod \"community-operators-wj9pn\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: W0127 14:07:44.327934 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbbed75_f666_4324_be28_902bb6564058.slice/crio-b1801c471a77aeb58219589a8eaf13d34a6787a27a22c7040bb0e0e438cd4f64 WatchSource:0}: Error finding container b1801c471a77aeb58219589a8eaf13d34a6787a27a22c7040bb0e0e438cd4f64: Status 404 returned error can't find the container with id b1801c471a77aeb58219589a8eaf13d34a6787a27a22c7040bb0e0e438cd4f64 Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.353515 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.407710 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.428907 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.439283 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.439335 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.479533 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ck87f\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 
14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.596345 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.643127 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.648166 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:44 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:44 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:44 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.648233 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.652309 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wj9pn"] Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.710840 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e59601bb-7561-4555-99e9-0e6faf392716-secret-volume\") pod \"e59601bb-7561-4555-99e9-0e6faf392716\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.710952 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e59601bb-7561-4555-99e9-0e6faf392716-config-volume\") pod \"e59601bb-7561-4555-99e9-0e6faf392716\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.711010 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrzr\" (UniqueName: \"kubernetes.io/projected/e59601bb-7561-4555-99e9-0e6faf392716-kube-api-access-nkrzr\") pod \"e59601bb-7561-4555-99e9-0e6faf392716\" (UID: \"e59601bb-7561-4555-99e9-0e6faf392716\") " Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.711853 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59601bb-7561-4555-99e9-0e6faf392716-config-volume" (OuterVolumeSpecName: "config-volume") pod "e59601bb-7561-4555-99e9-0e6faf392716" (UID: "e59601bb-7561-4555-99e9-0e6faf392716"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.713412 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkb8m"] Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.717218 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59601bb-7561-4555-99e9-0e6faf392716-kube-api-access-nkrzr" (OuterVolumeSpecName: "kube-api-access-nkrzr") pod "e59601bb-7561-4555-99e9-0e6faf392716" (UID: "e59601bb-7561-4555-99e9-0e6faf392716"). InnerVolumeSpecName "kube-api-access-nkrzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.717750 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59601bb-7561-4555-99e9-0e6faf392716-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e59601bb-7561-4555-99e9-0e6faf392716" (UID: "e59601bb-7561-4555-99e9-0e6faf392716"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.812369 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e59601bb-7561-4555-99e9-0e6faf392716-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.812400 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrzr\" (UniqueName: \"kubernetes.io/projected/e59601bb-7561-4555-99e9-0e6faf392716-kube-api-access-nkrzr\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.812413 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e59601bb-7561-4555-99e9-0e6faf392716-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:44 crc kubenswrapper[4729]: I0127 14:07:44.847515 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck87f"] Jan 27 14:07:44 crc kubenswrapper[4729]: E0127 14:07:44.930663 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod766577d6_c3bd_42aa_8fce_ed48e92c546c.slice/crio-conmon-6658b74063a213e8fb5c898475cd6d53f2a0943d1fa977728a364506c352aa28.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod766577d6_c3bd_42aa_8fce_ed48e92c546c.slice/crio-6658b74063a213e8fb5c898475cd6d53f2a0943d1fa977728a364506c352aa28.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:07:44 crc kubenswrapper[4729]: W0127 14:07:44.956825 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod495678a7_f4e5_4ada_8da6_e4d573a2b7e0.slice/crio-93a50a9e6dd0102ee1ccc7bf2f6e4069147a8da3c645b40a7ed83aee820c2da8 WatchSource:0}: Error finding container 93a50a9e6dd0102ee1ccc7bf2f6e4069147a8da3c645b40a7ed83aee820c2da8: Status 404 returned error can't find the container with id 93a50a9e6dd0102ee1ccc7bf2f6e4069147a8da3c645b40a7ed83aee820c2da8 Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.048428 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.048731 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.048686 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.048781 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.243335 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:45 
crc kubenswrapper[4729]: I0127 14:07:45.243689 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.254717 4729 patch_prober.go:28] interesting pod/apiserver-76f77b778f-frjsp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]log ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]etcd ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/max-in-flight-filter ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 14:07:45 crc kubenswrapper[4729]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 14:07:45 crc kubenswrapper[4729]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/openshift.io-startinformers ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 14:07:45 crc kubenswrapper[4729]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 14:07:45 crc kubenswrapper[4729]: livez check failed Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.254768 4729 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-apiserver/apiserver-76f77b778f-frjsp" podUID="1b954a21-e466-4ad6-aac5-f3bcea883d63" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.285385 4729 generic.go:334] "Generic (PLEG): container finished" podID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerID="4c1ae9654ca54dfd8c1abc2a92d011697886b6907159f8dd6118fb183614ccb9" exitCode=0 Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.285453 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerDied","Data":"4c1ae9654ca54dfd8c1abc2a92d011697886b6907159f8dd6118fb183614ccb9"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.285484 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerStarted","Data":"35ce5fc6b9e8049552deedb797fc270f4cc4f58fff2aa85f7b0accd027ee93d9"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.296833 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" event={"ID":"e59601bb-7561-4555-99e9-0e6faf392716","Type":"ContainerDied","Data":"89c93e2b11c63daa6fa962d406b41b63d8e8ab71ce311271f7beb00e20bf0e1f"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.296886 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c93e2b11c63daa6fa962d406b41b63d8e8ab71ce311271f7beb00e20bf0e1f" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.296901 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.298649 4729 generic.go:334] "Generic (PLEG): container finished" podID="ade27118-861e-4da6-9a5e-600cfbef607f" containerID="513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d" exitCode=0 Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.298704 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerDied","Data":"513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.300153 4729 generic.go:334] "Generic (PLEG): container finished" podID="1cbbed75-f666-4324-be28-902bb6564058" containerID="e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb" exitCode=0 Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.300192 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerDied","Data":"e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.300210 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerStarted","Data":"b1801c471a77aeb58219589a8eaf13d34a6787a27a22c7040bb0e0e438cd4f64"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.307163 4729 generic.go:334] "Generic (PLEG): container finished" podID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerID="6658b74063a213e8fb5c898475cd6d53f2a0943d1fa977728a364506c352aa28" exitCode=0 Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.307256 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vkb8m" event={"ID":"766577d6-c3bd-42aa-8fce-ed48e92c546c","Type":"ContainerDied","Data":"6658b74063a213e8fb5c898475cd6d53f2a0943d1fa977728a364506c352aa28"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.307280 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkb8m" event={"ID":"766577d6-c3bd-42aa-8fce-ed48e92c546c","Type":"ContainerStarted","Data":"6a5818183d7b70344fbd378c63d40b255eda8b7168395c56b310e15a14bc744b"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.315437 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" event={"ID":"495678a7-f4e5-4ada-8da6-e4d573a2b7e0","Type":"ContainerStarted","Data":"cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.315488 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" event={"ID":"495678a7-f4e5-4ada-8da6-e4d573a2b7e0","Type":"ContainerStarted","Data":"93a50a9e6dd0102ee1ccc7bf2f6e4069147a8da3c645b40a7ed83aee820c2da8"} Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.315795 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.366345 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" podStartSLOduration=130.366301076 podStartE2EDuration="2m10.366301076s" podCreationTimestamp="2026-01-27 14:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:45.361969688 +0000 UTC m=+151.946160692" watchObservedRunningTime="2026-01-27 14:07:45.366301076 +0000 UTC m=+151.950492090" Jan 27 14:07:45 crc 
kubenswrapper[4729]: I0127 14:07:45.367923 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.368003 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.375033 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.597436 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 14:07:45 crc kubenswrapper[4729]: E0127 14:07:45.597774 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59601bb-7561-4555-99e9-0e6faf392716" containerName="collect-profiles" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.597788 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59601bb-7561-4555-99e9-0e6faf392716" containerName="collect-profiles" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.597899 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59601bb-7561-4555-99e9-0e6faf392716" containerName="collect-profiles" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.598239 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.601256 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.602057 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.605598 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4lk"] Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.606958 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.607869 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.612760 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.639999 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4lk"] Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.640092 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.643546 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:45 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:45 crc kubenswrapper[4729]: 
[+]process-running ok Jan 27 14:07:45 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.643595 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.723885 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvfw2\" (UniqueName: \"kubernetes.io/projected/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-kube-api-access-gvfw2\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.723933 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-utilities\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.724157 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eca1204-7e46-4515-9d64-ea3367f0bea3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.724346 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-catalog-content\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " 
pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.724434 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eca1204-7e46-4515-9d64-ea3367f0bea3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.750166 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.750300 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.753735 4729 patch_prober.go:28] interesting pod/console-f9d7485db-cg69z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.753792 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cg69z" podUID="8e60df4d-540b-489f-a297-46f35014add0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.825723 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-catalog-content\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.825956 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eca1204-7e46-4515-9d64-ea3367f0bea3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.826093 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvfw2\" (UniqueName: \"kubernetes.io/projected/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-kube-api-access-gvfw2\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.826161 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-utilities\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.826332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-catalog-content\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.826380 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eca1204-7e46-4515-9d64-ea3367f0bea3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.826485 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eca1204-7e46-4515-9d64-ea3367f0bea3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.827056 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-utilities\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.848298 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvfw2\" (UniqueName: \"kubernetes.io/projected/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-kube-api-access-gvfw2\") pod \"redhat-marketplace-gf4lk\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.850966 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eca1204-7e46-4515-9d64-ea3367f0bea3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.919973 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:45 crc kubenswrapper[4729]: I0127 14:07:45.934359 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.007044 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-km592"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.008524 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.014899 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-km592"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.071270 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.129930 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-catalog-content\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.129993 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-utilities\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.130022 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns29r\" (UniqueName: \"kubernetes.io/projected/953193fa-cdc0-4312-8718-029d76ef8d01-kube-api-access-ns29r\") pod \"redhat-marketplace-km592\" 
(UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: W0127 14:07:46.182216 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5eca1204_7e46_4515_9d64_ea3367f0bea3.slice/crio-87bc76acd63da361b6edf32823e7a829b3bcfa2f5a2481143e6f04fe62a1d3a0 WatchSource:0}: Error finding container 87bc76acd63da361b6edf32823e7a829b3bcfa2f5a2481143e6f04fe62a1d3a0: Status 404 returned error can't find the container with id 87bc76acd63da361b6edf32823e7a829b3bcfa2f5a2481143e6f04fe62a1d3a0 Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.190653 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.209761 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4lk"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.231528 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-catalog-content\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.232051 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-utilities\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.232086 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns29r\" (UniqueName: 
\"kubernetes.io/projected/953193fa-cdc0-4312-8718-029d76ef8d01-kube-api-access-ns29r\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.232483 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-catalog-content\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.232549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-utilities\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: W0127 14:07:46.247337 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd9faf32_c248_4421_bbf6_66ec8b28dbc7.slice/crio-43d639436b196d701bf0784895a0c198335629016853d468af1048f8a3d2671f WatchSource:0}: Error finding container 43d639436b196d701bf0784895a0c198335629016853d468af1048f8a3d2671f: Status 404 returned error can't find the container with id 43d639436b196d701bf0784895a0c198335629016853d468af1048f8a3d2671f Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.252180 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns29r\" (UniqueName: \"kubernetes.io/projected/953193fa-cdc0-4312-8718-029d76ef8d01-kube-api-access-ns29r\") pod \"redhat-marketplace-km592\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.324645 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4lk" event={"ID":"fd9faf32-c248-4421-bbf6-66ec8b28dbc7","Type":"ContainerStarted","Data":"43d639436b196d701bf0784895a0c198335629016853d468af1048f8a3d2671f"} Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.326202 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eca1204-7e46-4515-9d64-ea3367f0bea3","Type":"ContainerStarted","Data":"87bc76acd63da361b6edf32823e7a829b3bcfa2f5a2481143e6f04fe62a1d3a0"} Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.332703 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sbq9j" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.333378 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.606814 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2b2hj"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.608324 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.611563 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.617026 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2b2hj"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.643938 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:46 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:46 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:46 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.644005 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.740506 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-utilities\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.740568 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-catalog-content\") pod \"redhat-operators-2b2hj\" (UID: 
\"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.740626 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2khb\" (UniqueName: \"kubernetes.io/projected/4c60b76f-4f77-4591-9589-815de0bf6047-kube-api-access-p2khb\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.843691 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-utilities\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.843753 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-catalog-content\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.843917 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2khb\" (UniqueName: \"kubernetes.io/projected/4c60b76f-4f77-4591-9589-815de0bf6047-kube-api-access-p2khb\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.845256 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-utilities\") pod \"redhat-operators-2b2hj\" (UID: 
\"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.845500 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-catalog-content\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.868103 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2khb\" (UniqueName: \"kubernetes.io/projected/4c60b76f-4f77-4591-9589-815de0bf6047-kube-api-access-p2khb\") pod \"redhat-operators-2b2hj\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.898167 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-km592"] Jan 27 14:07:46 crc kubenswrapper[4729]: I0127 14:07:46.925159 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:07:46 crc kubenswrapper[4729]: W0127 14:07:46.988406 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod953193fa_cdc0_4312_8718_029d76ef8d01.slice/crio-cb0a6e38e7bb6674d834b72c12e3a50abef943f0195a48352551f8d9dcc9598e WatchSource:0}: Error finding container cb0a6e38e7bb6674d834b72c12e3a50abef943f0195a48352551f8d9dcc9598e: Status 404 returned error can't find the container with id cb0a6e38e7bb6674d834b72c12e3a50abef943f0195a48352551f8d9dcc9598e Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.012827 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7gttv"] Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.014686 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.027327 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7gttv"] Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.148125 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-catalog-content\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.148504 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxzn\" (UniqueName: \"kubernetes.io/projected/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-kube-api-access-7wxzn\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc 
kubenswrapper[4729]: I0127 14:07:47.148548 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-utilities\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.151984 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2b2hj"] Jan 27 14:07:47 crc kubenswrapper[4729]: W0127 14:07:47.160965 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c60b76f_4f77_4591_9589_815de0bf6047.slice/crio-18de0809bd331ab19e67f73ee396425294fd8b4f862293eca7328c068ceeea95 WatchSource:0}: Error finding container 18de0809bd331ab19e67f73ee396425294fd8b4f862293eca7328c068ceeea95: Status 404 returned error can't find the container with id 18de0809bd331ab19e67f73ee396425294fd8b4f862293eca7328c068ceeea95 Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.249613 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-catalog-content\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.249675 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxzn\" (UniqueName: \"kubernetes.io/projected/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-kube-api-access-7wxzn\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.249724 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-utilities\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.250235 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-utilities\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.250787 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-catalog-content\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.277800 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxzn\" (UniqueName: \"kubernetes.io/projected/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-kube-api-access-7wxzn\") pod \"redhat-operators-7gttv\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.335465 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.336199 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerID="27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d" exitCode=0 Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.336265 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4lk" event={"ID":"fd9faf32-c248-4421-bbf6-66ec8b28dbc7","Type":"ContainerDied","Data":"27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d"} Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.339830 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eca1204-7e46-4515-9d64-ea3367f0bea3","Type":"ContainerStarted","Data":"b9844f4f934727c2e560145eafc256d02af7096b4160126d84067d47e174d21e"} Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.341328 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerStarted","Data":"18de0809bd331ab19e67f73ee396425294fd8b4f862293eca7328c068ceeea95"} Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.342571 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerStarted","Data":"cb0a6e38e7bb6674d834b72c12e3a50abef943f0195a48352551f8d9dcc9598e"} Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.579675 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.579639822 podStartE2EDuration="2.579639822s" podCreationTimestamp="2026-01-27 14:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:07:47.387448821 +0000 UTC m=+153.971639835" watchObservedRunningTime="2026-01-27 14:07:47.579639822 +0000 UTC m=+154.163830826" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.584753 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7gttv"] Jan 27 14:07:47 crc kubenswrapper[4729]: W0127 14:07:47.642860 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb2d1a1_31be_45bc_b6b2_ac53d002dba9.slice/crio-3902ded9e07399fd11fe250e6a8a99d47a197848d6e96ccf6dc65cb3e81f3e8b WatchSource:0}: Error finding container 3902ded9e07399fd11fe250e6a8a99d47a197848d6e96ccf6dc65cb3e81f3e8b: Status 404 returned error can't find the container with id 3902ded9e07399fd11fe250e6a8a99d47a197848d6e96ccf6dc65cb3e81f3e8b Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.648665 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:47 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:47 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:47 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.648724 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.944727 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.945737 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.948161 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.949471 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 14:07:47 crc kubenswrapper[4729]: I0127 14:07:47.956335 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.064677 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.064730 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.166432 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.166492 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.168492 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.188375 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.290783 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.362503 4729 generic.go:334] "Generic (PLEG): container finished" podID="5eca1204-7e46-4515-9d64-ea3367f0bea3" containerID="b9844f4f934727c2e560145eafc256d02af7096b4160126d84067d47e174d21e" exitCode=0 Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.362637 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eca1204-7e46-4515-9d64-ea3367f0bea3","Type":"ContainerDied","Data":"b9844f4f934727c2e560145eafc256d02af7096b4160126d84067d47e174d21e"} Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.366804 4729 generic.go:334] "Generic (PLEG): container finished" podID="4c60b76f-4f77-4591-9589-815de0bf6047" containerID="f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931" exitCode=0 Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.366931 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerDied","Data":"f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931"} Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.376028 4729 generic.go:334] "Generic (PLEG): container finished" podID="953193fa-cdc0-4312-8718-029d76ef8d01" containerID="9c28d1c9e04919ab3770ba9045f787898bf2c2f5bfd4fcea9cb3f8c4d40c869d" exitCode=0 Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.376138 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerDied","Data":"9c28d1c9e04919ab3770ba9045f787898bf2c2f5bfd4fcea9cb3f8c4d40c869d"} Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.380831 4729 generic.go:334] "Generic (PLEG): container finished" podID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" 
containerID="c0d9425a4d7822ca8da39235bdb08514bcf7176312fc8892d8ca6981f2609c82" exitCode=0 Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.382031 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerDied","Data":"c0d9425a4d7822ca8da39235bdb08514bcf7176312fc8892d8ca6981f2609c82"} Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.382076 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerStarted","Data":"3902ded9e07399fd11fe250e6a8a99d47a197848d6e96ccf6dc65cb3e81f3e8b"} Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.643545 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:48 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:48 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:48 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.643630 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:48 crc kubenswrapper[4729]: I0127 14:07:48.809558 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 14:07:49 crc kubenswrapper[4729]: I0127 14:07:49.649203 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:49 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:49 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:49 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:49 crc kubenswrapper[4729]: I0127 14:07:49.649947 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:50 crc kubenswrapper[4729]: I0127 14:07:50.250097 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:50 crc kubenswrapper[4729]: I0127 14:07:50.261238 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-frjsp" Jan 27 14:07:50 crc kubenswrapper[4729]: I0127 14:07:50.562053 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-knf65" Jan 27 14:07:50 crc kubenswrapper[4729]: I0127 14:07:50.642756 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:50 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:50 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:50 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:50 crc kubenswrapper[4729]: I0127 14:07:50.642820 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:51 crc kubenswrapper[4729]: 
I0127 14:07:51.643127 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:51 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:51 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:51 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:51 crc kubenswrapper[4729]: I0127 14:07:51.643241 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:52 crc kubenswrapper[4729]: I0127 14:07:52.651054 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:52 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:52 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:52 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:52 crc kubenswrapper[4729]: I0127 14:07:52.651170 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:52 crc kubenswrapper[4729]: I0127 14:07:52.656023 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:07:52 crc 
kubenswrapper[4729]: I0127 14:07:52.656096 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:07:53 crc kubenswrapper[4729]: I0127 14:07:53.646119 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:53 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:53 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:53 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:53 crc kubenswrapper[4729]: I0127 14:07:53.646429 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.644092 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:54 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:54 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:54 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.644693 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Jan 27 14:07:54 crc kubenswrapper[4729]: W0127 14:07:54.692828 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0a8672d_5a93_434a_a7d4_43bd1ecb348b.slice/crio-d539df3853a73e373659e48c83d1188d9450b95adbb4d134ef110598f03f17fc WatchSource:0}: Error finding container d539df3853a73e373659e48c83d1188d9450b95adbb4d134ef110598f03f17fc: Status 404 returned error can't find the container with id d539df3853a73e373659e48c83d1188d9450b95adbb4d134ef110598f03f17fc Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.712797 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.817895 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eca1204-7e46-4515-9d64-ea3367f0bea3-kube-api-access\") pod \"5eca1204-7e46-4515-9d64-ea3367f0bea3\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.818018 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eca1204-7e46-4515-9d64-ea3367f0bea3-kubelet-dir\") pod \"5eca1204-7e46-4515-9d64-ea3367f0bea3\" (UID: \"5eca1204-7e46-4515-9d64-ea3367f0bea3\") " Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.818149 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5eca1204-7e46-4515-9d64-ea3367f0bea3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5eca1204-7e46-4515-9d64-ea3367f0bea3" (UID: "5eca1204-7e46-4515-9d64-ea3367f0bea3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.818386 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5eca1204-7e46-4515-9d64-ea3367f0bea3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.829158 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eca1204-7e46-4515-9d64-ea3367f0bea3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5eca1204-7e46-4515-9d64-ea3367f0bea3" (UID: "5eca1204-7e46-4515-9d64-ea3367f0bea3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:07:54 crc kubenswrapper[4729]: I0127 14:07:54.919538 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eca1204-7e46-4515-9d64-ea3367f0bea3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.047629 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.047690 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.047637 4729 patch_prober.go:28] interesting pod/downloads-7954f5f757-6rfvl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 
10.217.0.8:8080: connect: connection refused" start-of-body= Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.047903 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6rfvl" podUID="5291212b-828e-4312-aa3a-0187772f076f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.487620 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5eca1204-7e46-4515-9d64-ea3367f0bea3","Type":"ContainerDied","Data":"87bc76acd63da361b6edf32823e7a829b3bcfa2f5a2481143e6f04fe62a1d3a0"} Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.488135 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87bc76acd63da361b6edf32823e7a829b3bcfa2f5a2481143e6f04fe62a1d3a0" Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.487668 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.490414 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0a8672d-5a93-434a-a7d4-43bd1ecb348b","Type":"ContainerStarted","Data":"d539df3853a73e373659e48c83d1188d9450b95adbb4d134ef110598f03f17fc"} Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.645413 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:55 crc kubenswrapper[4729]: [-]has-synced failed: reason withheld Jan 27 14:07:55 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:55 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.645532 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.689430 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zgq57"] Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.689841 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" containerID="cri-o://079db0abcf08b6b51cb2847f212a902b22d17ec6953a79c00eef2fa980085c8e" gracePeriod=30 Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.702720 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"] Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.706415 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" containerID="cri-o://b0d0a843915a529a44c244061109317f3c581b3026e6fcc73701d2efd9dcd059" gracePeriod=30 Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.754211 4729 patch_prober.go:28] interesting pod/console-f9d7485db-cg69z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 14:07:55 crc kubenswrapper[4729]: I0127 14:07:55.754312 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cg69z" podUID="8e60df4d-540b-489f-a297-46f35014add0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 14:07:56 crc kubenswrapper[4729]: I0127 14:07:56.500991 4729 generic.go:334] "Generic (PLEG): container finished" podID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerID="b0d0a843915a529a44c244061109317f3c581b3026e6fcc73701d2efd9dcd059" exitCode=0 Jan 27 14:07:56 crc kubenswrapper[4729]: I0127 14:07:56.501061 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" event={"ID":"a385db05-78b0-4bb7-80b9-e0089b92e40c","Type":"ContainerDied","Data":"b0d0a843915a529a44c244061109317f3c581b3026e6fcc73701d2efd9dcd059"} Jan 27 14:07:56 crc kubenswrapper[4729]: I0127 14:07:56.646419 4729 patch_prober.go:28] interesting pod/router-default-5444994796-f58jq container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 14:07:56 crc kubenswrapper[4729]: [+]has-synced ok Jan 27 14:07:56 crc kubenswrapper[4729]: [+]process-running ok Jan 27 14:07:56 crc kubenswrapper[4729]: healthz check failed Jan 27 14:07:56 crc kubenswrapper[4729]: I0127 14:07:56.646482 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-f58jq" podUID="e18f11c9-d605-41a8-9443-214c8d6a5c85" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 14:07:56 crc kubenswrapper[4729]: I0127 14:07:56.760634 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:56 crc kubenswrapper[4729]: I0127 14:07:56.769273 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c06c7af2-5a87-49e1-82ce-84aa16280c72-metrics-certs\") pod \"network-metrics-daemon-thlc7\" (UID: \"c06c7af2-5a87-49e1-82ce-84aa16280c72\") " pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:57 crc kubenswrapper[4729]: I0127 14:07:57.065976 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-thlc7" Jan 27 14:07:57 crc kubenswrapper[4729]: I0127 14:07:57.511449 4729 generic.go:334] "Generic (PLEG): container finished" podID="5e3f6beb-ef04-47ba-8738-849691b10351" containerID="079db0abcf08b6b51cb2847f212a902b22d17ec6953a79c00eef2fa980085c8e" exitCode=0 Jan 27 14:07:57 crc kubenswrapper[4729]: I0127 14:07:57.511500 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" event={"ID":"5e3f6beb-ef04-47ba-8738-849691b10351","Type":"ContainerDied","Data":"079db0abcf08b6b51cb2847f212a902b22d17ec6953a79c00eef2fa980085c8e"} Jan 27 14:07:57 crc kubenswrapper[4729]: I0127 14:07:57.645066 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:07:57 crc kubenswrapper[4729]: I0127 14:07:57.655368 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-f58jq" Jan 27 14:08:03 crc kubenswrapper[4729]: I0127 14:08:03.505654 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zgq57 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:08:03 crc kubenswrapper[4729]: I0127 14:08:03.506257 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:08:05 crc kubenswrapper[4729]: I0127 14:08:05.055318 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:08:05 crc kubenswrapper[4729]: I0127 14:08:05.066869 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6rfvl" Jan 27 14:08:05 crc kubenswrapper[4729]: I0127 14:08:05.530334 4729 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dx8cq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 27 14:08:05 crc kubenswrapper[4729]: I0127 14:08:05.530786 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 14:08:05 crc kubenswrapper[4729]: I0127 14:08:05.763836 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:08:05 crc kubenswrapper[4729]: I0127 14:08:05.767829 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:08:12 crc kubenswrapper[4729]: I0127 14:08:12.096686 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 14:08:13 crc kubenswrapper[4729]: I0127 14:08:13.506826 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zgq57 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 27 14:08:13 crc 
kubenswrapper[4729]: I0127 14:08:13.506921 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 27 14:08:15 crc kubenswrapper[4729]: I0127 14:08:15.458714 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8ksr" Jan 27 14:08:15 crc kubenswrapper[4729]: I0127 14:08:15.529525 4729 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dx8cq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 27 14:08:15 crc kubenswrapper[4729]: I0127 14:08:15.529573 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 14:08:22 crc kubenswrapper[4729]: I0127 14:08:22.655472 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:08:22 crc kubenswrapper[4729]: I0127 14:08:22.657212 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:08:23 crc kubenswrapper[4729]: E0127 14:08:23.550294 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 14:08:23 crc kubenswrapper[4729]: E0127 14:08:23.550536 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zxw9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fallb
ackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tgstt_openshift-marketplace(1cbbed75-f666-4324-be28-902bb6564058): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:23 crc kubenswrapper[4729]: E0127 14:08:23.551709 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tgstt" podUID="1cbbed75-f666-4324-be28-902bb6564058" Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.742506 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:08:23 crc kubenswrapper[4729]: E0127 14:08:23.742924 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eca1204-7e46-4515-9d64-ea3367f0bea3" containerName="pruner" Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.742947 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eca1204-7e46-4515-9d64-ea3367f0bea3" containerName="pruner" Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.743106 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eca1204-7e46-4515-9d64-ea3367f0bea3" containerName="pruner" Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.743788 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.759836 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.924394 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:23 crc kubenswrapper[4729]: I0127 14:08:23.925086 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.027433 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.027686 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.027836 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.051681 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.081239 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.506113 4729 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zgq57 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:08:24 crc kubenswrapper[4729]: I0127 14:08:24.506182 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:08:25 crc kubenswrapper[4729]: I0127 14:08:25.529265 4729 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dx8cq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: 
connect: connection refused" start-of-body= Jan 27 14:08:25 crc kubenswrapper[4729]: I0127 14:08:25.529592 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 27 14:08:26 crc kubenswrapper[4729]: E0127 14:08:26.165616 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tgstt" podUID="1cbbed75-f666-4324-be28-902bb6564058" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.234434 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.273550 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp"] Jan 27 14:08:26 crc kubenswrapper[4729]: E0127 14:08:26.273757 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.273770 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.273871 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" containerName="controller-manager" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.274176 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp"] Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.274247 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.359357 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxz8z\" (UniqueName: \"kubernetes.io/projected/5e3f6beb-ef04-47ba-8738-849691b10351-kube-api-access-qxz8z\") pod \"5e3f6beb-ef04-47ba-8738-849691b10351\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.359455 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-client-ca\") pod \"5e3f6beb-ef04-47ba-8738-849691b10351\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.359524 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-proxy-ca-bundles\") pod \"5e3f6beb-ef04-47ba-8738-849691b10351\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.359563 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3f6beb-ef04-47ba-8738-849691b10351-serving-cert\") pod \"5e3f6beb-ef04-47ba-8738-849691b10351\" (UID: \"5e3f6beb-ef04-47ba-8738-849691b10351\") " Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.359591 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-config\") pod \"5e3f6beb-ef04-47ba-8738-849691b10351\" (UID: 
\"5e3f6beb-ef04-47ba-8738-849691b10351\") " Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.360288 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-client-ca" (OuterVolumeSpecName: "client-ca") pod "5e3f6beb-ef04-47ba-8738-849691b10351" (UID: "5e3f6beb-ef04-47ba-8738-849691b10351"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.360509 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5e3f6beb-ef04-47ba-8738-849691b10351" (UID: "5e3f6beb-ef04-47ba-8738-849691b10351"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.360639 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-config" (OuterVolumeSpecName: "config") pod "5e3f6beb-ef04-47ba-8738-849691b10351" (UID: "5e3f6beb-ef04-47ba-8738-849691b10351"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.365194 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3f6beb-ef04-47ba-8738-849691b10351-kube-api-access-qxz8z" (OuterVolumeSpecName: "kube-api-access-qxz8z") pod "5e3f6beb-ef04-47ba-8738-849691b10351" (UID: "5e3f6beb-ef04-47ba-8738-849691b10351"). InnerVolumeSpecName "kube-api-access-qxz8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.365451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3f6beb-ef04-47ba-8738-849691b10351-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5e3f6beb-ef04-47ba-8738-849691b10351" (UID: "5e3f6beb-ef04-47ba-8738-849691b10351"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467140 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-config\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467316 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-proxy-ca-bundles\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467477 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-client-ca\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467559 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/11e15569-f897-42c7-b765-a42aec47482e-serving-cert\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467598 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnxm\" (UniqueName: \"kubernetes.io/projected/11e15569-f897-42c7-b765-a42aec47482e-kube-api-access-fmnxm\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467803 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467837 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467848 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e3f6beb-ef04-47ba-8738-849691b10351-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467858 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e3f6beb-ef04-47ba-8738-849691b10351-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.467868 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxz8z\" (UniqueName: \"kubernetes.io/projected/5e3f6beb-ef04-47ba-8738-849691b10351-kube-api-access-qxz8z\") on node 
\"crc\" DevicePath \"\"" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.568486 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-client-ca\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.568538 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e15569-f897-42c7-b765-a42aec47482e-serving-cert\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.568568 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmnxm\" (UniqueName: \"kubernetes.io/projected/11e15569-f897-42c7-b765-a42aec47482e-kube-api-access-fmnxm\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.568608 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-config\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.568633 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-proxy-ca-bundles\") pod \"controller-manager-75ccfb69d9-8lvwp\" 
(UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.569407 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-client-ca\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.569648 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-proxy-ca-bundles\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.570952 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-config\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.572483 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e15569-f897-42c7-b765-a42aec47482e-serving-cert\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.589412 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmnxm\" (UniqueName: 
\"kubernetes.io/projected/11e15569-f897-42c7-b765-a42aec47482e-kube-api-access-fmnxm\") pod \"controller-manager-75ccfb69d9-8lvwp\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:26 crc kubenswrapper[4729]: I0127 14:08:26.595390 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:27 crc kubenswrapper[4729]: I0127 14:08:27.181394 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" event={"ID":"5e3f6beb-ef04-47ba-8738-849691b10351","Type":"ContainerDied","Data":"4e3ece4aea83e4fdaab8a40c3d51dff50cfa98605e00cc21112bf7ed44ef4117"} Jan 27 14:08:27 crc kubenswrapper[4729]: I0127 14:08:27.181493 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zgq57" Jan 27 14:08:27 crc kubenswrapper[4729]: I0127 14:08:27.181739 4729 scope.go:117] "RemoveContainer" containerID="079db0abcf08b6b51cb2847f212a902b22d17ec6953a79c00eef2fa980085c8e" Jan 27 14:08:27 crc kubenswrapper[4729]: I0127 14:08:27.214482 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zgq57"] Jan 27 14:08:27 crc kubenswrapper[4729]: I0127 14:08:27.220708 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zgq57"] Jan 27 14:08:28 crc kubenswrapper[4729]: I0127 14:08:28.057902 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3f6beb-ef04-47ba-8738-849691b10351" path="/var/lib/kubelet/pods/5e3f6beb-ef04-47ba-8738-849691b10351/volumes" Jan 27 14:08:28 crc kubenswrapper[4729]: I0127 14:08:28.935399 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 
14:08:28 crc kubenswrapper[4729]: I0127 14:08:28.936601 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:28 crc kubenswrapper[4729]: I0127 14:08:28.948943 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.099975 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-var-lock\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.100040 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9efef83a-a7bb-46a3-b382-e040b7804bf5-kube-api-access\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.100136 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.201278 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9efef83a-a7bb-46a3-b382-e040b7804bf5-kube-api-access\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.201410 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.201489 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-var-lock\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.201573 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-var-lock\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.201970 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.221225 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9efef83a-a7bb-46a3-b382-e040b7804bf5-kube-api-access\") pod \"installer-9-crc\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:29 crc kubenswrapper[4729]: I0127 14:08:29.254580 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:08:30 crc kubenswrapper[4729]: E0127 14:08:30.815270 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 14:08:30 crc kubenswrapper[4729]: E0127 14:08:30.816522 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ns29r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-km592_openshift-marketplace(953193fa-cdc0-4312-8718-029d76ef8d01): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:30 crc kubenswrapper[4729]: E0127 14:08:30.817992 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-km592" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.247758 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.247966 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lth7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dpvk8_openshift-marketplace(ade27118-861e-4da6-9a5e-600cfbef607f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.249252 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dpvk8" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" Jan 27 14:08:31 crc 
kubenswrapper[4729]: E0127 14:08:31.279405 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.279552 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrsm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-vkb8m_openshift-marketplace(766577d6-c3bd-42aa-8fce-ed48e92c546c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.280743 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vkb8m" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.857487 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.858064 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp7wd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-wj9pn_openshift-marketplace(2a0aca0d-c2f0-4bff-87e5-872d5012bdcf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:31 crc kubenswrapper[4729]: E0127 14:08:31.859434 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-wj9pn" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" Jan 27 14:08:35 crc 
kubenswrapper[4729]: E0127 14:08:35.839959 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vkb8m" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" Jan 27 14:08:35 crc kubenswrapper[4729]: E0127 14:08:35.839992 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dpvk8" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" Jan 27 14:08:35 crc kubenswrapper[4729]: E0127 14:08:35.840092 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-km592" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" Jan 27 14:08:35 crc kubenswrapper[4729]: E0127 14:08:35.840129 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-wj9pn" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.885431 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.921002 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r"] Jan 27 14:08:35 crc kubenswrapper[4729]: E0127 14:08:35.921268 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.921279 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.921386 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.921801 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.924570 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r"] Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.987965 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a385db05-78b0-4bb7-80b9-e0089b92e40c-serving-cert\") pod \"a385db05-78b0-4bb7-80b9-e0089b92e40c\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988046 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-config\") pod \"a385db05-78b0-4bb7-80b9-e0089b92e40c\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988069 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmvzn\" (UniqueName: \"kubernetes.io/projected/a385db05-78b0-4bb7-80b9-e0089b92e40c-kube-api-access-rmvzn\") pod \"a385db05-78b0-4bb7-80b9-e0089b92e40c\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988121 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-client-ca\") pod \"a385db05-78b0-4bb7-80b9-e0089b92e40c\" (UID: \"a385db05-78b0-4bb7-80b9-e0089b92e40c\") " Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988292 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-config\") pod 
\"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988368 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-client-ca\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988393 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw7nr\" (UniqueName: \"kubernetes.io/projected/f999fa2c-7d62-43f5-b593-385b13d5b6f2-kube-api-access-bw7nr\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.988418 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f999fa2c-7d62-43f5-b593-385b13d5b6f2-serving-cert\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.989036 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-config" (OuterVolumeSpecName: "config") pod "a385db05-78b0-4bb7-80b9-e0089b92e40c" (UID: "a385db05-78b0-4bb7-80b9-e0089b92e40c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.989226 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-client-ca" (OuterVolumeSpecName: "client-ca") pod "a385db05-78b0-4bb7-80b9-e0089b92e40c" (UID: "a385db05-78b0-4bb7-80b9-e0089b92e40c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:08:35 crc kubenswrapper[4729]: I0127 14:08:35.995870 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a385db05-78b0-4bb7-80b9-e0089b92e40c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a385db05-78b0-4bb7-80b9-e0089b92e40c" (UID: "a385db05-78b0-4bb7-80b9-e0089b92e40c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.004961 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a385db05-78b0-4bb7-80b9-e0089b92e40c-kube-api-access-rmvzn" (OuterVolumeSpecName: "kube-api-access-rmvzn") pod "a385db05-78b0-4bb7-80b9-e0089b92e40c" (UID: "a385db05-78b0-4bb7-80b9-e0089b92e40c"). InnerVolumeSpecName "kube-api-access-rmvzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.037096 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-thlc7"] Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.089710 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-client-ca\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.089793 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw7nr\" (UniqueName: \"kubernetes.io/projected/f999fa2c-7d62-43f5-b593-385b13d5b6f2-kube-api-access-bw7nr\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.089843 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f999fa2c-7d62-43f5-b593-385b13d5b6f2-serving-cert\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.089934 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-config\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc 
kubenswrapper[4729]: I0127 14:08:36.089985 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.090000 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmvzn\" (UniqueName: \"kubernetes.io/projected/a385db05-78b0-4bb7-80b9-e0089b92e40c-kube-api-access-rmvzn\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.090015 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a385db05-78b0-4bb7-80b9-e0089b92e40c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.090028 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a385db05-78b0-4bb7-80b9-e0089b92e40c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.232357 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" event={"ID":"a385db05-78b0-4bb7-80b9-e0089b92e40c","Type":"ContainerDied","Data":"0af8ce5f334aa59b664d81f707aa9a81764fb53e0e035f3a9ef932b9eb93c4bf"} Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.232417 4729 scope.go:117] "RemoveContainer" containerID="b0d0a843915a529a44c244061109317f3c581b3026e6fcc73701d2efd9dcd059" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.232427 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.254008 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"] Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.257800 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq"] Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.529298 4729 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dx8cq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.529366 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dx8cq" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.758344 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-client-ca\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.758524 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f999fa2c-7d62-43f5-b593-385b13d5b6f2-serving-cert\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.758832 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-config\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.761362 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw7nr\" (UniqueName: \"kubernetes.io/projected/f999fa2c-7d62-43f5-b593-385b13d5b6f2-kube-api-access-bw7nr\") pod \"route-controller-manager-59fbb49c55-74c5r\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:36 crc kubenswrapper[4729]: I0127 14:08:36.841458 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:38 crc kubenswrapper[4729]: I0127 14:08:38.058895 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a385db05-78b0-4bb7-80b9-e0089b92e40c" path="/var/lib/kubelet/pods/a385db05-78b0-4bb7-80b9-e0089b92e40c/volumes" Jan 27 14:08:38 crc kubenswrapper[4729]: E0127 14:08:38.838439 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 14:08:38 crc kubenswrapper[4729]: E0127 14:08:38.838637 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvfw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gf4lk_openshift-marketplace(fd9faf32-c248-4421-bbf6-66ec8b28dbc7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:38 crc kubenswrapper[4729]: E0127 14:08:38.840638 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gf4lk" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" Jan 27 14:08:47 crc 
kubenswrapper[4729]: E0127 14:08:47.070318 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gf4lk" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" Jan 27 14:08:47 crc kubenswrapper[4729]: I0127 14:08:47.273938 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 14:08:47 crc kubenswrapper[4729]: I0127 14:08:47.305666 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-thlc7" event={"ID":"c06c7af2-5a87-49e1-82ce-84aa16280c72","Type":"ContainerStarted","Data":"b7c03ddc3b46f662df25b19959bec520cb05df5f16b3eb53bb985a0914a06f94"} Jan 27 14:08:47 crc kubenswrapper[4729]: I0127 14:08:47.312496 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 14:08:47 crc kubenswrapper[4729]: W0127 14:08:47.321089 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda15745ea_afdc_409d_a3a6_cc4ea2048e03.slice/crio-4f3e4a5222718488e53437c050b7a12bfc27c3de5fca0705fa8f4b8571e78346 WatchSource:0}: Error finding container 4f3e4a5222718488e53437c050b7a12bfc27c3de5fca0705fa8f4b8571e78346: Status 404 returned error can't find the container with id 4f3e4a5222718488e53437c050b7a12bfc27c3de5fca0705fa8f4b8571e78346 Jan 27 14:08:47 crc kubenswrapper[4729]: E0127 14:08:47.325945 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 14:08:47 crc kubenswrapper[4729]: E0127 14:08:47.326043 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2khb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2b2hj_openshift-marketplace(4c60b76f-4f77-4591-9589-815de0bf6047): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:47 crc kubenswrapper[4729]: E0127 14:08:47.327189 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2b2hj" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" Jan 27 14:08:47 crc kubenswrapper[4729]: E0127 14:08:47.506425 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 14:08:47 crc kubenswrapper[4729]: E0127 14:08:47.507230 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wxzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7gttv_openshift-marketplace(6eb2d1a1-31be-45bc-b6b2-ac53d002dba9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:08:47 crc kubenswrapper[4729]: E0127 14:08:47.508500 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7gttv" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" Jan 27 14:08:47 crc kubenswrapper[4729]: I0127 14:08:47.583409 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp"] Jan 27 14:08:47 crc kubenswrapper[4729]: W0127 14:08:47.585745 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e15569_f897_42c7_b765_a42aec47482e.slice/crio-e13016c96cc68ca79e239a1d208f7921bd8222c1744a04239c82c917ce89dfea WatchSource:0}: Error finding container e13016c96cc68ca79e239a1d208f7921bd8222c1744a04239c82c917ce89dfea: Status 404 returned error can't find the container with id e13016c96cc68ca79e239a1d208f7921bd8222c1744a04239c82c917ce89dfea Jan 27 14:08:47 crc kubenswrapper[4729]: W0127 14:08:47.586702 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf999fa2c_7d62_43f5_b593_385b13d5b6f2.slice/crio-7722d23203ffefdb3b74e1de0c78f02c48a66373180a2c16f1d00f74c9bba76c WatchSource:0}: Error finding container 7722d23203ffefdb3b74e1de0c78f02c48a66373180a2c16f1d00f74c9bba76c: Status 404 returned error can't find the container 
with id 7722d23203ffefdb3b74e1de0c78f02c48a66373180a2c16f1d00f74c9bba76c Jan 27 14:08:47 crc kubenswrapper[4729]: I0127 14:08:47.588775 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r"] Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.312576 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0a8672d-5a93-434a-a7d4-43bd1ecb348b","Type":"ContainerStarted","Data":"5ce7b1df4121b94362eff2812fe8dfb88d119c1bdfa82975df5b24eb9e83b9fe"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.315600 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-thlc7" event={"ID":"c06c7af2-5a87-49e1-82ce-84aa16280c72","Type":"ContainerStarted","Data":"dac800acc48859867598f307630a9ed998773bc2b9f513acb00fbcb05736fb1e"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.315674 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-thlc7" event={"ID":"c06c7af2-5a87-49e1-82ce-84aa16280c72","Type":"ContainerStarted","Data":"6f0267ece068fe5ab1ba8d173028b8581466e8af37cd81d461305ea48558535a"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.317053 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9efef83a-a7bb-46a3-b382-e040b7804bf5","Type":"ContainerStarted","Data":"6529859d8da23fc0c5ac1d875b28824afe24d7559943206dc3cc079d983f2e96"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.317100 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9efef83a-a7bb-46a3-b382-e040b7804bf5","Type":"ContainerStarted","Data":"2dd8ccb5a549569fe04f6ad37fcfdce1ad86b65539b05d168a3fb6a4872aed27"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.318393 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" event={"ID":"f999fa2c-7d62-43f5-b593-385b13d5b6f2","Type":"ContainerStarted","Data":"8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.318421 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" event={"ID":"f999fa2c-7d62-43f5-b593-385b13d5b6f2","Type":"ContainerStarted","Data":"7722d23203ffefdb3b74e1de0c78f02c48a66373180a2c16f1d00f74c9bba76c"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.319779 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" event={"ID":"11e15569-f897-42c7-b765-a42aec47482e","Type":"ContainerStarted","Data":"768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.319807 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" event={"ID":"11e15569-f897-42c7-b765-a42aec47482e","Type":"ContainerStarted","Data":"e13016c96cc68ca79e239a1d208f7921bd8222c1744a04239c82c917ce89dfea"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.321075 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a15745ea-afdc-409d-a3a6-cc4ea2048e03","Type":"ContainerStarted","Data":"3b3b8eb06e6766f09563fe491701439d4cee9b4ed702e99290186ff1325a3311"} Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.321115 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a15745ea-afdc-409d-a3a6-cc4ea2048e03","Type":"ContainerStarted","Data":"4f3e4a5222718488e53437c050b7a12bfc27c3de5fca0705fa8f4b8571e78346"} Jan 27 14:08:48 crc kubenswrapper[4729]: E0127 14:08:48.323667 4729 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2b2hj" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.333564 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=61.333521329 podStartE2EDuration="1m1.333521329s" podCreationTimestamp="2026-01-27 14:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:48.332002484 +0000 UTC m=+214.916193498" watchObservedRunningTime="2026-01-27 14:08:48.333521329 +0000 UTC m=+214.917712363" Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.355929 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" podStartSLOduration=33.355911498 podStartE2EDuration="33.355911498s" podCreationTimestamp="2026-01-27 14:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:48.355051492 +0000 UTC m=+214.939242496" watchObservedRunningTime="2026-01-27 14:08:48.355911498 +0000 UTC m=+214.940102522" Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.372639 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=25.372623107 podStartE2EDuration="25.372623107s" podCreationTimestamp="2026-01-27 14:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:48.372535305 +0000 UTC m=+214.956726309" 
watchObservedRunningTime="2026-01-27 14:08:48.372623107 +0000 UTC m=+214.956814111" Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.395587 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" podStartSLOduration=33.395566012 podStartE2EDuration="33.395566012s" podCreationTimestamp="2026-01-27 14:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:48.391385607 +0000 UTC m=+214.975576611" watchObservedRunningTime="2026-01-27 14:08:48.395566012 +0000 UTC m=+214.979757026" Jan 27 14:08:48 crc kubenswrapper[4729]: I0127 14:08:48.454521 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=20.454507373 podStartE2EDuration="20.454507373s" podCreationTimestamp="2026-01-27 14:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:48.452222035 +0000 UTC m=+215.036413039" watchObservedRunningTime="2026-01-27 14:08:48.454507373 +0000 UTC m=+215.038698377" Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.333753 4729 generic.go:334] "Generic (PLEG): container finished" podID="b0a8672d-5a93-434a-a7d4-43bd1ecb348b" containerID="5ce7b1df4121b94362eff2812fe8dfb88d119c1bdfa82975df5b24eb9e83b9fe" exitCode=0 Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.333827 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0a8672d-5a93-434a-a7d4-43bd1ecb348b","Type":"ContainerDied","Data":"5ce7b1df4121b94362eff2812fe8dfb88d119c1bdfa82975df5b24eb9e83b9fe"} Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.335603 4729 generic.go:334] "Generic (PLEG): container finished" podID="a15745ea-afdc-409d-a3a6-cc4ea2048e03" 
containerID="3b3b8eb06e6766f09563fe491701439d4cee9b4ed702e99290186ff1325a3311" exitCode=0 Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.336180 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a15745ea-afdc-409d-a3a6-cc4ea2048e03","Type":"ContainerDied","Data":"3b3b8eb06e6766f09563fe491701439d4cee9b4ed702e99290186ff1325a3311"} Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.336836 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.336906 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.343982 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.345746 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:08:49 crc kubenswrapper[4729]: I0127 14:08:49.406851 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-thlc7" podStartSLOduration=195.40682982 podStartE2EDuration="3m15.40682982s" podCreationTimestamp="2026-01-27 14:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:08:49.399616424 +0000 UTC m=+215.983807428" watchObservedRunningTime="2026-01-27 14:08:49.40682982 +0000 UTC m=+215.991020844" Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.850067 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.948376 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.982826 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kubelet-dir\") pod \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.982923 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a15745ea-afdc-409d-a3a6-cc4ea2048e03" (UID: "a15745ea-afdc-409d-a3a6-cc4ea2048e03"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.982975 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kube-api-access\") pod \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\" (UID: \"a15745ea-afdc-409d-a3a6-cc4ea2048e03\") " Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.983290 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:50 crc kubenswrapper[4729]: I0127 14:08:50.988640 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a15745ea-afdc-409d-a3a6-cc4ea2048e03" (UID: 
"a15745ea-afdc-409d-a3a6-cc4ea2048e03"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.084798 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kubelet-dir\") pod \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.085313 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kube-api-access\") pod \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\" (UID: \"b0a8672d-5a93-434a-a7d4-43bd1ecb348b\") " Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.085516 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a15745ea-afdc-409d-a3a6-cc4ea2048e03-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.086393 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0a8672d-5a93-434a-a7d4-43bd1ecb348b" (UID: "b0a8672d-5a93-434a-a7d4-43bd1ecb348b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.095828 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0a8672d-5a93-434a-a7d4-43bd1ecb348b" (UID: "b0a8672d-5a93-434a-a7d4-43bd1ecb348b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.187262 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.187324 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0a8672d-5a93-434a-a7d4-43bd1ecb348b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.346757 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerStarted","Data":"4e331df83dd83c61d52c2ea80b8f46f94cecf268723082ef0cdcee885474e878"} Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.348676 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerStarted","Data":"7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6"} Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.350478 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a15745ea-afdc-409d-a3a6-cc4ea2048e03","Type":"ContainerDied","Data":"4f3e4a5222718488e53437c050b7a12bfc27c3de5fca0705fa8f4b8571e78346"} Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.350531 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f3e4a5222718488e53437c050b7a12bfc27c3de5fca0705fa8f4b8571e78346" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.350658 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.351766 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b0a8672d-5a93-434a-a7d4-43bd1ecb348b","Type":"ContainerDied","Data":"d539df3853a73e373659e48c83d1188d9450b95adbb4d134ef110598f03f17fc"} Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.351796 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d539df3853a73e373659e48c83d1188d9450b95adbb4d134ef110598f03f17fc" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.351836 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 14:08:51 crc kubenswrapper[4729]: I0127 14:08:51.354897 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerStarted","Data":"2f0b51ec696a3987e73c6710004af5f6da8fa435949223a38b40d1c74b33567b"} Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.362311 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerStarted","Data":"bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5"} Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.364615 4729 generic.go:334] "Generic (PLEG): container finished" podID="953193fa-cdc0-4312-8718-029d76ef8d01" containerID="4e331df83dd83c61d52c2ea80b8f46f94cecf268723082ef0cdcee885474e878" exitCode=0 Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.364670 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" 
event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerDied","Data":"4e331df83dd83c61d52c2ea80b8f46f94cecf268723082ef0cdcee885474e878"} Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.366540 4729 generic.go:334] "Generic (PLEG): container finished" podID="1cbbed75-f666-4324-be28-902bb6564058" containerID="7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6" exitCode=0 Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.366597 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerDied","Data":"7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6"} Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.371244 4729 generic.go:334] "Generic (PLEG): container finished" podID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerID="2f0b51ec696a3987e73c6710004af5f6da8fa435949223a38b40d1c74b33567b" exitCode=0 Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.371277 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerDied","Data":"2f0b51ec696a3987e73c6710004af5f6da8fa435949223a38b40d1c74b33567b"} Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.655329 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.655711 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.655840 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.656575 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:08:52 crc kubenswrapper[4729]: I0127 14:08:52.656806 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa" gracePeriod=600 Jan 27 14:08:53 crc kubenswrapper[4729]: I0127 14:08:53.380254 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa" exitCode=0 Jan 27 14:08:53 crc kubenswrapper[4729]: I0127 14:08:53.380386 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa"} Jan 27 14:08:53 crc kubenswrapper[4729]: I0127 14:08:53.381241 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"3d6ab415e1f8e163dce5bc940fd983790a5e9e14fc18252da80a42960d3d6ae0"} Jan 27 14:08:53 crc kubenswrapper[4729]: I0127 14:08:53.383551 4729 generic.go:334] "Generic (PLEG): container finished" podID="ade27118-861e-4da6-9a5e-600cfbef607f" containerID="bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5" exitCode=0 Jan 27 14:08:53 crc kubenswrapper[4729]: I0127 14:08:53.383596 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerDied","Data":"bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5"} Jan 27 14:08:53 crc kubenswrapper[4729]: I0127 14:08:53.386910 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerStarted","Data":"661a87e3e6d7b9cdb5cfe83ede2c42c17328b9c33c536e6b6ed6d9f6ddb97c01"} Jan 27 14:08:54 crc kubenswrapper[4729]: I0127 14:08:54.398783 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerStarted","Data":"140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260"} Jan 27 14:08:54 crc kubenswrapper[4729]: I0127 14:08:54.422143 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-km592" podStartSLOduration=4.7002947 podStartE2EDuration="1m9.422120703s" podCreationTimestamp="2026-01-27 14:07:45 +0000 UTC" firstStartedPulling="2026-01-27 14:07:48.382182486 +0000 UTC m=+154.966373490" lastFinishedPulling="2026-01-27 14:08:53.104008489 +0000 UTC m=+219.688199493" observedRunningTime="2026-01-27 14:08:54.416768814 +0000 UTC m=+221.000959878" watchObservedRunningTime="2026-01-27 14:08:54.422120703 +0000 UTC 
m=+221.006311747" Jan 27 14:08:54 crc kubenswrapper[4729]: I0127 14:08:54.443605 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tgstt" podStartSLOduration=3.084833566 podStartE2EDuration="1m11.443582465s" podCreationTimestamp="2026-01-27 14:07:43 +0000 UTC" firstStartedPulling="2026-01-27 14:07:45.30131413 +0000 UTC m=+151.885505134" lastFinishedPulling="2026-01-27 14:08:53.660063019 +0000 UTC m=+220.244254033" observedRunningTime="2026-01-27 14:08:54.440137441 +0000 UTC m=+221.024328485" watchObservedRunningTime="2026-01-27 14:08:54.443582465 +0000 UTC m=+221.027773479" Jan 27 14:08:55 crc kubenswrapper[4729]: I0127 14:08:55.404279 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerStarted","Data":"185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4"} Jan 27 14:08:56 crc kubenswrapper[4729]: I0127 14:08:56.334114 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:08:56 crc kubenswrapper[4729]: I0127 14:08:56.334189 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:08:56 crc kubenswrapper[4729]: I0127 14:08:56.416721 4729 generic.go:334] "Generic (PLEG): container finished" podID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerID="8c97cb0536a604a384369b8a027048c45f33ce2ccc846f2fbce1fd94944c685f" exitCode=0 Jan 27 14:08:56 crc kubenswrapper[4729]: I0127 14:08:56.416837 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkb8m" event={"ID":"766577d6-c3bd-42aa-8fce-ed48e92c546c","Type":"ContainerDied","Data":"8c97cb0536a604a384369b8a027048c45f33ce2ccc846f2fbce1fd94944c685f"} Jan 27 14:08:56 crc kubenswrapper[4729]: I0127 14:08:56.452117 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dpvk8" podStartSLOduration=2.810294902 podStartE2EDuration="1m13.452099472s" podCreationTimestamp="2026-01-27 14:07:43 +0000 UTC" firstStartedPulling="2026-01-27 14:07:44.272907758 +0000 UTC m=+150.857098762" lastFinishedPulling="2026-01-27 14:08:54.914712328 +0000 UTC m=+221.498903332" observedRunningTime="2026-01-27 14:08:56.449384741 +0000 UTC m=+223.033575745" watchObservedRunningTime="2026-01-27 14:08:56.452099472 +0000 UTC m=+223.036290476" Jan 27 14:08:56 crc kubenswrapper[4729]: I0127 14:08:56.780156 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:08:59 crc kubenswrapper[4729]: I0127 14:08:59.434329 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkb8m" event={"ID":"766577d6-c3bd-42aa-8fce-ed48e92c546c","Type":"ContainerStarted","Data":"1e3d3fbd786183988f84c074d76345adf3eb676f340bd7e71922c25849c42214"} Jan 27 14:08:59 crc kubenswrapper[4729]: I0127 14:08:59.436341 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerStarted","Data":"31709b86865ec89129ae4b031c460573951c2a316bc8a73ed734057408486e0c"} Jan 27 14:08:59 crc kubenswrapper[4729]: I0127 14:08:59.456046 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkb8m" podStartSLOduration=2.830271216 podStartE2EDuration="1m16.456024584s" podCreationTimestamp="2026-01-27 14:07:43 +0000 UTC" firstStartedPulling="2026-01-27 14:07:45.313233671 +0000 UTC m=+151.897424675" lastFinishedPulling="2026-01-27 14:08:58.938987039 +0000 UTC m=+225.523178043" observedRunningTime="2026-01-27 14:08:59.454279522 +0000 UTC m=+226.038470536" watchObservedRunningTime="2026-01-27 
14:08:59.456024584 +0000 UTC m=+226.040215598" Jan 27 14:08:59 crc kubenswrapper[4729]: I0127 14:08:59.469797 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wj9pn" podStartSLOduration=2.724030998 podStartE2EDuration="1m16.469777555s" podCreationTimestamp="2026-01-27 14:07:43 +0000 UTC" firstStartedPulling="2026-01-27 14:07:45.287327053 +0000 UTC m=+151.871518057" lastFinishedPulling="2026-01-27 14:08:59.03307362 +0000 UTC m=+225.617264614" observedRunningTime="2026-01-27 14:08:59.468653732 +0000 UTC m=+226.052844736" watchObservedRunningTime="2026-01-27 14:08:59.469777555 +0000 UTC m=+226.053968559" Jan 27 14:09:00 crc kubenswrapper[4729]: I0127 14:09:00.442931 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerID="bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941" exitCode=0 Jan 27 14:09:00 crc kubenswrapper[4729]: I0127 14:09:00.443024 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4lk" event={"ID":"fd9faf32-c248-4421-bbf6-66ec8b28dbc7","Type":"ContainerDied","Data":"bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941"} Jan 27 14:09:01 crc kubenswrapper[4729]: I0127 14:09:01.449217 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4lk" event={"ID":"fd9faf32-c248-4421-bbf6-66ec8b28dbc7","Type":"ContainerStarted","Data":"ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf"} Jan 27 14:09:01 crc kubenswrapper[4729]: I0127 14:09:01.491229 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gf4lk" podStartSLOduration=2.833960585 podStartE2EDuration="1m16.491195938s" podCreationTimestamp="2026-01-27 14:07:45 +0000 UTC" firstStartedPulling="2026-01-27 14:07:47.346971337 +0000 UTC m=+153.931162341" 
lastFinishedPulling="2026-01-27 14:09:01.00420669 +0000 UTC m=+227.588397694" observedRunningTime="2026-01-27 14:09:01.486274221 +0000 UTC m=+228.070465225" watchObservedRunningTime="2026-01-27 14:09:01.491195938 +0000 UTC m=+228.075386942" Jan 27 14:09:02 crc kubenswrapper[4729]: I0127 14:09:02.473305 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerStarted","Data":"bb7f800e512ce47ca4c604ce176f180c48df54f9a8697a69da83605e1cc824f2"} Jan 27 14:09:03 crc kubenswrapper[4729]: I0127 14:09:03.733871 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:09:03 crc kubenswrapper[4729]: I0127 14:09:03.734308 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:09:03 crc kubenswrapper[4729]: I0127 14:09:03.896778 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:09:03 crc kubenswrapper[4729]: I0127 14:09:03.920423 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:09:03 crc kubenswrapper[4729]: I0127 14:09:03.921778 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:09:03 crc kubenswrapper[4729]: I0127 14:09:03.991522 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.354566 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.354851 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.411641 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.429779 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.430380 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.467404 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.492122 4729 generic.go:334] "Generic (PLEG): container finished" podID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerID="bb7f800e512ce47ca4c604ce176f180c48df54f9a8697a69da83605e1cc824f2" exitCode=0 Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.493118 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerDied","Data":"bb7f800e512ce47ca4c604ce176f180c48df54f9a8697a69da83605e1cc824f2"} Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.543732 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.558224 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.559460 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkb8m" 
Jan 27 14:09:04 crc kubenswrapper[4729]: I0127 14:09:04.571189 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:09:05 crc kubenswrapper[4729]: I0127 14:09:05.935088 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:09:05 crc kubenswrapper[4729]: I0127 14:09:05.935411 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:09:05 crc kubenswrapper[4729]: I0127 14:09:05.982573 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:09:06 crc kubenswrapper[4729]: I0127 14:09:06.381746 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:09:06 crc kubenswrapper[4729]: I0127 14:09:06.537031 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:09:07 crc kubenswrapper[4729]: I0127 14:09:07.830208 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wj9pn"] Jan 27 14:09:07 crc kubenswrapper[4729]: I0127 14:09:07.830450 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wj9pn" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="registry-server" containerID="cri-o://31709b86865ec89129ae4b031c460573951c2a316bc8a73ed734057408486e0c" gracePeriod=2 Jan 27 14:09:08 crc kubenswrapper[4729]: I0127 14:09:08.832350 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkb8m"] Jan 27 14:09:08 crc kubenswrapper[4729]: I0127 14:09:08.832606 4729 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-vkb8m" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="registry-server" containerID="cri-o://1e3d3fbd786183988f84c074d76345adf3eb676f340bd7e71922c25849c42214" gracePeriod=2 Jan 27 14:09:09 crc kubenswrapper[4729]: I0127 14:09:09.520157 4729 generic.go:334] "Generic (PLEG): container finished" podID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerID="31709b86865ec89129ae4b031c460573951c2a316bc8a73ed734057408486e0c" exitCode=0 Jan 27 14:09:09 crc kubenswrapper[4729]: I0127 14:09:09.520218 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerDied","Data":"31709b86865ec89129ae4b031c460573951c2a316bc8a73ed734057408486e0c"} Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.226855 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-km592"] Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.227450 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-km592" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="registry-server" containerID="cri-o://661a87e3e6d7b9cdb5cfe83ede2c42c17328b9c33c536e6b6ed6d9f6ddb97c01" gracePeriod=2 Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.333979 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.347346 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-utilities\") pod \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.347405 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-catalog-content\") pod \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.347475 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp7wd\" (UniqueName: \"kubernetes.io/projected/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-kube-api-access-jp7wd\") pod \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\" (UID: \"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf\") " Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.348111 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-utilities" (OuterVolumeSpecName: "utilities") pod "2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" (UID: "2a0aca0d-c2f0-4bff-87e5-872d5012bdcf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.355245 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-kube-api-access-jp7wd" (OuterVolumeSpecName: "kube-api-access-jp7wd") pod "2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" (UID: "2a0aca0d-c2f0-4bff-87e5-872d5012bdcf"). InnerVolumeSpecName "kube-api-access-jp7wd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.449280 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp7wd\" (UniqueName: \"kubernetes.io/projected/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-kube-api-access-jp7wd\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.449333 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.527621 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wj9pn" event={"ID":"2a0aca0d-c2f0-4bff-87e5-872d5012bdcf","Type":"ContainerDied","Data":"35ce5fc6b9e8049552deedb797fc270f4cc4f58fff2aa85f7b0accd027ee93d9"} Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.527696 4729 scope.go:117] "RemoveContainer" containerID="31709b86865ec89129ae4b031c460573951c2a316bc8a73ed734057408486e0c" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.527829 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wj9pn" Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.534264 4729 generic.go:334] "Generic (PLEG): container finished" podID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerID="1e3d3fbd786183988f84c074d76345adf3eb676f340bd7e71922c25849c42214" exitCode=0 Jan 27 14:09:10 crc kubenswrapper[4729]: I0127 14:09:10.534317 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkb8m" event={"ID":"766577d6-c3bd-42aa-8fce-ed48e92c546c","Type":"ContainerDied","Data":"1e3d3fbd786183988f84c074d76345adf3eb676f340bd7e71922c25849c42214"} Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.071273 4729 scope.go:117] "RemoveContainer" containerID="2f0b51ec696a3987e73c6710004af5f6da8fa435949223a38b40d1c74b33567b" Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.116651 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" (UID: "2a0aca0d-c2f0-4bff-87e5-872d5012bdcf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.158278 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.217442 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wj9pn"] Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.222932 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wj9pn"] Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.544400 4729 generic.go:334] "Generic (PLEG): container finished" podID="953193fa-cdc0-4312-8718-029d76ef8d01" containerID="661a87e3e6d7b9cdb5cfe83ede2c42c17328b9c33c536e6b6ed6d9f6ddb97c01" exitCode=0 Jan 27 14:09:11 crc kubenswrapper[4729]: I0127 14:09:11.544447 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerDied","Data":"661a87e3e6d7b9cdb5cfe83ede2c42c17328b9c33c536e6b6ed6d9f6ddb97c01"} Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.044736 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.049685 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.056030 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" path="/var/lib/kubelet/pods/2a0aca0d-c2f0-4bff-87e5-872d5012bdcf/volumes" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.071138 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrsm2\" (UniqueName: \"kubernetes.io/projected/766577d6-c3bd-42aa-8fce-ed48e92c546c-kube-api-access-rrsm2\") pod \"766577d6-c3bd-42aa-8fce-ed48e92c546c\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.071224 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-utilities\") pod \"766577d6-c3bd-42aa-8fce-ed48e92c546c\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.071268 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-catalog-content\") pod \"953193fa-cdc0-4312-8718-029d76ef8d01\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.071292 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-utilities\") pod \"953193fa-cdc0-4312-8718-029d76ef8d01\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.071350 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns29r\" (UniqueName: 
\"kubernetes.io/projected/953193fa-cdc0-4312-8718-029d76ef8d01-kube-api-access-ns29r\") pod \"953193fa-cdc0-4312-8718-029d76ef8d01\" (UID: \"953193fa-cdc0-4312-8718-029d76ef8d01\") " Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.071376 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-catalog-content\") pod \"766577d6-c3bd-42aa-8fce-ed48e92c546c\" (UID: \"766577d6-c3bd-42aa-8fce-ed48e92c546c\") " Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.072148 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-utilities" (OuterVolumeSpecName: "utilities") pod "766577d6-c3bd-42aa-8fce-ed48e92c546c" (UID: "766577d6-c3bd-42aa-8fce-ed48e92c546c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.072931 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-utilities" (OuterVolumeSpecName: "utilities") pod "953193fa-cdc0-4312-8718-029d76ef8d01" (UID: "953193fa-cdc0-4312-8718-029d76ef8d01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.076156 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953193fa-cdc0-4312-8718-029d76ef8d01-kube-api-access-ns29r" (OuterVolumeSpecName: "kube-api-access-ns29r") pod "953193fa-cdc0-4312-8718-029d76ef8d01" (UID: "953193fa-cdc0-4312-8718-029d76ef8d01"). InnerVolumeSpecName "kube-api-access-ns29r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.076599 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766577d6-c3bd-42aa-8fce-ed48e92c546c-kube-api-access-rrsm2" (OuterVolumeSpecName: "kube-api-access-rrsm2") pod "766577d6-c3bd-42aa-8fce-ed48e92c546c" (UID: "766577d6-c3bd-42aa-8fce-ed48e92c546c"). InnerVolumeSpecName "kube-api-access-rrsm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.099574 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "953193fa-cdc0-4312-8718-029d76ef8d01" (UID: "953193fa-cdc0-4312-8718-029d76ef8d01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.165922 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "766577d6-c3bd-42aa-8fce-ed48e92c546c" (UID: "766577d6-c3bd-42aa-8fce-ed48e92c546c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.173431 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.173470 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.173481 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/953193fa-cdc0-4312-8718-029d76ef8d01-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.173492 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns29r\" (UniqueName: \"kubernetes.io/projected/953193fa-cdc0-4312-8718-029d76ef8d01-kube-api-access-ns29r\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.173500 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/766577d6-c3bd-42aa-8fce-ed48e92c546c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.173508 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrsm2\" (UniqueName: \"kubernetes.io/projected/766577d6-c3bd-42aa-8fce-ed48e92c546c-kube-api-access-rrsm2\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.393934 4729 scope.go:117] "RemoveContainer" containerID="4c1ae9654ca54dfd8c1abc2a92d011697886b6907159f8dd6118fb183614ccb9" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.555451 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-km592" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.556990 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-km592" event={"ID":"953193fa-cdc0-4312-8718-029d76ef8d01","Type":"ContainerDied","Data":"cb0a6e38e7bb6674d834b72c12e3a50abef943f0195a48352551f8d9dcc9598e"} Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.560369 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkb8m" event={"ID":"766577d6-c3bd-42aa-8fce-ed48e92c546c","Type":"ContainerDied","Data":"6a5818183d7b70344fbd378c63d40b255eda8b7168395c56b310e15a14bc744b"} Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.560467 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkb8m" Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.590344 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkb8m"] Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.598544 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkb8m"] Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.610953 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-km592"] Jan 27 14:09:12 crc kubenswrapper[4729]: I0127 14:09:12.612826 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-km592"] Jan 27 14:09:13 crc kubenswrapper[4729]: I0127 14:09:13.315141 4729 scope.go:117] "RemoveContainer" containerID="661a87e3e6d7b9cdb5cfe83ede2c42c17328b9c33c536e6b6ed6d9f6ddb97c01" Jan 27 14:09:13 crc kubenswrapper[4729]: I0127 14:09:13.342913 4729 scope.go:117] "RemoveContainer" containerID="4e331df83dd83c61d52c2ea80b8f46f94cecf268723082ef0cdcee885474e878" Jan 27 14:09:13 crc 
kubenswrapper[4729]: I0127 14:09:13.360840 4729 scope.go:117] "RemoveContainer" containerID="9c28d1c9e04919ab3770ba9045f787898bf2c2f5bfd4fcea9cb3f8c4d40c869d" Jan 27 14:09:13 crc kubenswrapper[4729]: I0127 14:09:13.379464 4729 scope.go:117] "RemoveContainer" containerID="1e3d3fbd786183988f84c074d76345adf3eb676f340bd7e71922c25849c42214" Jan 27 14:09:13 crc kubenswrapper[4729]: I0127 14:09:13.397230 4729 scope.go:117] "RemoveContainer" containerID="8c97cb0536a604a384369b8a027048c45f33ce2ccc846f2fbce1fd94944c685f" Jan 27 14:09:13 crc kubenswrapper[4729]: I0127 14:09:13.410871 4729 scope.go:117] "RemoveContainer" containerID="6658b74063a213e8fb5c898475cd6d53f2a0943d1fa977728a364506c352aa28" Jan 27 14:09:14 crc kubenswrapper[4729]: I0127 14:09:14.057665 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" path="/var/lib/kubelet/pods/766577d6-c3bd-42aa-8fce-ed48e92c546c/volumes" Jan 27 14:09:14 crc kubenswrapper[4729]: I0127 14:09:14.059072 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" path="/var/lib/kubelet/pods/953193fa-cdc0-4312-8718-029d76ef8d01/volumes" Jan 27 14:09:14 crc kubenswrapper[4729]: I0127 14:09:14.592771 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerStarted","Data":"c1392bfc9a308a3edb08f3684e12a5f214428669657e1fc9b5bdabff6b670cc8"} Jan 27 14:09:14 crc kubenswrapper[4729]: I0127 14:09:14.594637 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerStarted","Data":"b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3"} Jan 27 14:09:14 crc kubenswrapper[4729]: I0127 14:09:14.639576 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-7gttv" podStartSLOduration=8.37952182 podStartE2EDuration="1m28.639560218s" podCreationTimestamp="2026-01-27 14:07:46 +0000 UTC" firstStartedPulling="2026-01-27 14:07:53.055615662 +0000 UTC m=+159.639806666" lastFinishedPulling="2026-01-27 14:09:13.31565405 +0000 UTC m=+239.899845064" observedRunningTime="2026-01-27 14:09:14.623748106 +0000 UTC m=+241.207939110" watchObservedRunningTime="2026-01-27 14:09:14.639560218 +0000 UTC m=+241.223751222" Jan 27 14:09:15 crc kubenswrapper[4729]: I0127 14:09:15.603230 4729 generic.go:334] "Generic (PLEG): container finished" podID="4c60b76f-4f77-4591-9589-815de0bf6047" containerID="b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3" exitCode=0 Jan 27 14:09:15 crc kubenswrapper[4729]: I0127 14:09:15.603322 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerDied","Data":"b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3"} Jan 27 14:09:17 crc kubenswrapper[4729]: I0127 14:09:17.336717 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:09:17 crc kubenswrapper[4729]: I0127 14:09:17.340035 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:09:18 crc kubenswrapper[4729]: I0127 14:09:18.382217 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7gttv" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="registry-server" probeResult="failure" output=< Jan 27 14:09:18 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:09:18 crc kubenswrapper[4729]: > Jan 27 14:09:24 crc kubenswrapper[4729]: I0127 14:09:24.653603 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerStarted","Data":"9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3"} Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.466542 4729 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467394 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="extract-utilities" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467425 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="extract-utilities" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467441 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="extract-content" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467454 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="extract-content" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467473 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467486 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467507 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="extract-content" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467520 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="extract-content" Jan 27 14:09:25 crc 
kubenswrapper[4729]: E0127 14:09:25.467538 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467549 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467570 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15745ea-afdc-409d-a3a6-cc4ea2048e03" containerName="pruner" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467584 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15745ea-afdc-409d-a3a6-cc4ea2048e03" containerName="pruner" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467605 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="extract-content" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467617 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="extract-content" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467634 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="extract-utilities" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467646 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="extract-utilities" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467668 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a8672d-5a93-434a-a7d4-43bd1ecb348b" containerName="pruner" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467680 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a8672d-5a93-434a-a7d4-43bd1ecb348b" containerName="pruner" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467711 
4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="extract-utilities" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467727 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="extract-utilities" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.467744 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.467759 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468020 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a8672d-5a93-434a-a7d4-43bd1ecb348b" containerName="pruner" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468049 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15745ea-afdc-409d-a3a6-cc4ea2048e03" containerName="pruner" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468070 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0aca0d-c2f0-4bff-87e5-872d5012bdcf" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468097 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="953193fa-cdc0-4312-8718-029d76ef8d01" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468124 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="766577d6-c3bd-42aa-8fce-ed48e92c546c" containerName="registry-server" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468711 4729 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.468752 4729 kubelet.go:2421] "SyncLoop 
ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469123 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469155 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469168 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469174 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469182 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469189 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469197 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469202 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469213 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469219 4729 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469232 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469238 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.469247 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469252 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469354 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469366 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469325 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d" gracePeriod=15 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469368 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" containerID="cri-o://88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9" gracePeriod=15 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469404 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d" gracePeriod=15 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469375 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469509 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b" gracePeriod=15 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469542 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7" gracePeriod=15 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469549 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469567 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 
14:09:25.469583 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.469606 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.470015 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.470039 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.470170 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.480481 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.513889 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2b2hj" podStartSLOduration=3.987920307 podStartE2EDuration="1m39.51385737s" podCreationTimestamp="2026-01-27 14:07:46 +0000 UTC" firstStartedPulling="2026-01-27 14:07:48.370137042 +0000 UTC m=+154.954328046" lastFinishedPulling="2026-01-27 14:09:23.896074105 +0000 UTC m=+250.480265109" observedRunningTime="2026-01-27 14:09:24.672182358 +0000 UTC m=+251.256373372" watchObservedRunningTime="2026-01-27 14:09:25.51385737 +0000 UTC m=+252.098048384" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.515232 4729 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573447 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573495 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573517 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573534 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573556 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573578 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573859 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.573916 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.660154 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.662522 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.663135 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9" exitCode=0 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.663165 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7" exitCode=2 Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675026 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675097 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675131 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675155 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675189 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675215 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675259 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675281 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675297 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675375 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675428 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675449 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675480 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675500 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675513 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.675548 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: I0127 14:09:25.816848 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:09:25 crc kubenswrapper[4729]: W0127 14:09:25.956939 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-8c21705212f4c72d8cbbb209e5e66cb7165c10cb02adb5046ca18fb02162e75b WatchSource:0}: Error finding container 8c21705212f4c72d8cbbb209e5e66cb7165c10cb02adb5046ca18fb02162e75b: Status 404 returned error can't find the container with id 8c21705212f4c72d8cbbb209e5e66cb7165c10cb02adb5046ca18fb02162e75b Jan 27 14:09:25 crc kubenswrapper[4729]: E0127 14:09:25.960475 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.171:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9bc31fedf54e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:09:25.959800142 +0000 UTC m=+252.543991146,LastTimestamp:2026-01-27 14:09:25.959800142 +0000 UTC m=+252.543991146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.669632 4729 generic.go:334] "Generic (PLEG): container finished" podID="9efef83a-a7bb-46a3-b382-e040b7804bf5" containerID="6529859d8da23fc0c5ac1d875b28824afe24d7559943206dc3cc079d983f2e96" exitCode=0 Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.669711 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9efef83a-a7bb-46a3-b382-e040b7804bf5","Type":"ContainerDied","Data":"6529859d8da23fc0c5ac1d875b28824afe24d7559943206dc3cc079d983f2e96"} Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.670611 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.671001 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.671782 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"215c9f2769489d5b56991c81484981f4f5b493e0e884781598d44e405d019aee"} Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.671824 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8c21705212f4c72d8cbbb209e5e66cb7165c10cb02adb5046ca18fb02162e75b"} Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.672180 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.672526 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.673775 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.674987 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.675662 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b" exitCode=0 Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.675764 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d" exitCode=0 Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.675781 4729 scope.go:117] "RemoveContainer" containerID="4950f03580fc685fba0eb68d58dc94531c555f9743df24e19bc3c4dee49a4a4c" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.925830 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:09:26 crc kubenswrapper[4729]: I0127 14:09:26.926582 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.376420 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.377418 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.377652 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.377816 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.411707 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.412325 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.412632 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.412958 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: 
I0127 14:09:27.684092 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.866468 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.868105 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.869041 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.869345 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.869735 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.870016 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" 
pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.906972 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907377 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907094 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907432 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907446 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907546 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907682 4729 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907696 4729 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.907707 4729 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.962449 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2b2hj" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="registry-server" probeResult="failure" output=< Jan 27 14:09:27 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:09:27 crc kubenswrapper[4729]: > Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.976514 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.977057 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.977324 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.977529 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:27 crc kubenswrapper[4729]: I0127 14:09:27.977723 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.057257 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.110493 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9efef83a-a7bb-46a3-b382-e040b7804bf5-kube-api-access\") pod \"9efef83a-a7bb-46a3-b382-e040b7804bf5\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.110605 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-kubelet-dir\") pod \"9efef83a-a7bb-46a3-b382-e040b7804bf5\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.110633 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-var-lock\") pod \"9efef83a-a7bb-46a3-b382-e040b7804bf5\" (UID: \"9efef83a-a7bb-46a3-b382-e040b7804bf5\") " Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.110891 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-var-lock" (OuterVolumeSpecName: "var-lock") pod "9efef83a-a7bb-46a3-b382-e040b7804bf5" (UID: "9efef83a-a7bb-46a3-b382-e040b7804bf5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.110932 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9efef83a-a7bb-46a3-b382-e040b7804bf5" (UID: "9efef83a-a7bb-46a3-b382-e040b7804bf5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.123517 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efef83a-a7bb-46a3-b382-e040b7804bf5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9efef83a-a7bb-46a3-b382-e040b7804bf5" (UID: "9efef83a-a7bb-46a3-b382-e040b7804bf5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.212057 4729 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.212107 4729 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9efef83a-a7bb-46a3-b382-e040b7804bf5-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.212123 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9efef83a-a7bb-46a3-b382-e040b7804bf5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.695281 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.696542 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d" exitCode=0 Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.696662 4729 scope.go:117] "RemoveContainer" containerID="3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b" Jan 27 14:09:28 crc kubenswrapper[4729]: 
I0127 14:09:28.697212 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.698098 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.699134 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.699578 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.699666 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9efef83a-a7bb-46a3-b382-e040b7804bf5","Type":"ContainerDied","Data":"2dd8ccb5a549569fe04f6ad37fcfdce1ad86b65539b05d168a3fb6a4872aed27"} Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.699702 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dd8ccb5a549569fe04f6ad37fcfdce1ad86b65539b05d168a3fb6a4872aed27" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.699779 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.699845 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.701951 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.702293 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.702629 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.702947 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.720549 4729 scope.go:117] "RemoveContainer" containerID="88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.721213 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.721783 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.722245 4729 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.722570 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.739054 4729 scope.go:117] "RemoveContainer" containerID="8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d" Jan 27 
14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.753841 4729 scope.go:117] "RemoveContainer" containerID="3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.777097 4729 scope.go:117] "RemoveContainer" containerID="eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.800100 4729 scope.go:117] "RemoveContainer" containerID="2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.824369 4729 scope.go:117] "RemoveContainer" containerID="3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b" Jan 27 14:09:28 crc kubenswrapper[4729]: E0127 14:09:28.825501 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\": container with ID starting with 3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b not found: ID does not exist" containerID="3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.825556 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b"} err="failed to get container status \"3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\": rpc error: code = NotFound desc = could not find container \"3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b\": container with ID starting with 3f36efd24be32b980394a9b613a17c1be1a194ce6be71aeab2ff93ba04ba949b not found: ID does not exist" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.825639 4729 scope.go:117] "RemoveContainer" containerID="88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9" Jan 27 14:09:28 crc kubenswrapper[4729]: E0127 
14:09:28.827213 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\": container with ID starting with 88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9 not found: ID does not exist" containerID="88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.827266 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9"} err="failed to get container status \"88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\": rpc error: code = NotFound desc = could not find container \"88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9\": container with ID starting with 88e9c693751a76ee99f29b60b88117e51f808eb6547ff844aa01e76b6726f9f9 not found: ID does not exist" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.827294 4729 scope.go:117] "RemoveContainer" containerID="8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d" Jan 27 14:09:28 crc kubenswrapper[4729]: E0127 14:09:28.830121 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\": container with ID starting with 8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d not found: ID does not exist" containerID="8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.830177 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d"} err="failed to get container status \"8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\": rpc 
error: code = NotFound desc = could not find container \"8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d\": container with ID starting with 8ff5138364f4a5b8df5fcf17dd7c6cd00d86da92c05181b20067bcc14f91938d not found: ID does not exist" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.830224 4729 scope.go:117] "RemoveContainer" containerID="3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7" Jan 27 14:09:28 crc kubenswrapper[4729]: E0127 14:09:28.830841 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\": container with ID starting with 3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7 not found: ID does not exist" containerID="3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.830907 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7"} err="failed to get container status \"3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\": rpc error: code = NotFound desc = could not find container \"3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7\": container with ID starting with 3fb93f2b0bd24f04adb2922fc19b0bc86a28221555e235d50bbe14b0ef66e7e7 not found: ID does not exist" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.830940 4729 scope.go:117] "RemoveContainer" containerID="eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d" Jan 27 14:09:28 crc kubenswrapper[4729]: E0127 14:09:28.831842 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\": container with ID starting with 
eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d not found: ID does not exist" containerID="eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.831917 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d"} err="failed to get container status \"eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\": rpc error: code = NotFound desc = could not find container \"eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d\": container with ID starting with eee5f682b643674399418280db196f2772e4088567caeba75c93c7c373bc1c2d not found: ID does not exist" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.831942 4729 scope.go:117] "RemoveContainer" containerID="2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f" Jan 27 14:09:28 crc kubenswrapper[4729]: E0127 14:09:28.832201 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\": container with ID starting with 2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f not found: ID does not exist" containerID="2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f" Jan 27 14:09:28 crc kubenswrapper[4729]: I0127 14:09:28.832237 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f"} err="failed to get container status \"2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\": rpc error: code = NotFound desc = could not find container \"2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f\": container with ID starting with 2eb38275a698eacb99bf3ca43709d50a5f073d218df63c221354cd716bf05d5f not found: ID does not 
exist" Jan 27 14:09:30 crc kubenswrapper[4729]: E0127 14:09:30.113786 4729 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.171:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" volumeName="registry-storage" Jan 27 14:09:31 crc kubenswrapper[4729]: E0127 14:09:31.175282 4729 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.171:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e9bc31fedf54e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 14:09:25.959800142 +0000 UTC m=+252.543991146,LastTimestamp:2026-01-27 14:09:25.959800142 +0000 UTC m=+252.543991146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 14:09:34 crc kubenswrapper[4729]: I0127 14:09:34.053418 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: I0127 14:09:34.054009 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: I0127 14:09:34.054328 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.751574 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.752038 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.752483 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.752847 4729 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.753119 4729 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:34 crc kubenswrapper[4729]: I0127 14:09:34.753186 4729 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.753467 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="200ms" Jan 27 14:09:34 crc kubenswrapper[4729]: E0127 14:09:34.954127 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="400ms" Jan 27 14:09:35 crc kubenswrapper[4729]: E0127 14:09:35.355584 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="800ms" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.050148 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.050778 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.051193 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.051735 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.064897 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.064942 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:36 crc kubenswrapper[4729]: E0127 14:09:36.065440 4729 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.065956 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:36 crc kubenswrapper[4729]: W0127 14:09:36.094546 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-3fab272e8d41f81d72d8eda68b685f5440b21ef1ed9ec94a15396ed9ea5a8e84 WatchSource:0}: Error finding container 3fab272e8d41f81d72d8eda68b685f5440b21ef1ed9ec94a15396ed9ea5a8e84: Status 404 returned error can't find the container with id 3fab272e8d41f81d72d8eda68b685f5440b21ef1ed9ec94a15396ed9ea5a8e84 Jan 27 14:09:36 crc kubenswrapper[4729]: E0127 14:09:36.156681 4729 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.171:6443: connect: connection refused" interval="1.6s" Jan 27 14:09:36 crc kubenswrapper[4729]: E0127 14:09:36.410569 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-conmon-941609d430d0cdadfa90d89e5e8ceb3911222b3412b0fcdc9fb7f4222aab13d5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-941609d430d0cdadfa90d89e5e8ceb3911222b3412b0fcdc9fb7f4222aab13d5.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.756280 4729 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="941609d430d0cdadfa90d89e5e8ceb3911222b3412b0fcdc9fb7f4222aab13d5" 
exitCode=0 Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.756334 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"941609d430d0cdadfa90d89e5e8ceb3911222b3412b0fcdc9fb7f4222aab13d5"} Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.756359 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3fab272e8d41f81d72d8eda68b685f5440b21ef1ed9ec94a15396ed9ea5a8e84"} Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.756575 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.756586 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:36 crc kubenswrapper[4729]: E0127 14:09:36.756959 4729 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.757249 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.757584 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.757968 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.964551 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.965180 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.965582 4729 status_manager.go:851] "Failed to get status for pod" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" pod="openshift-marketplace/redhat-operators-2b2hj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2b2hj\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.965849 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 
14:09:36 crc kubenswrapper[4729]: I0127 14:09:36.966099 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.001505 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.002286 4729 status_manager.go:851] "Failed to get status for pod" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.002821 4729 status_manager.go:851] "Failed to get status for pod" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" pod="openshift-marketplace/redhat-operators-7gttv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7gttv\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.003547 4729 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.003818 4729 status_manager.go:851] "Failed to get status for pod" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" pod="openshift-marketplace/redhat-operators-2b2hj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2b2hj\": dial tcp 38.129.56.171:6443: connect: connection refused" Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.763025 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0265dd8407e39edc4c101a3e0dab5314a6a7f845fce9bab4ff1a53bb094dd547"} Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.763337 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef62c00baba4511fc2fc6ea055c614abf49f880261bfdf2f32c55fb06db49323"} Jan 27 14:09:37 crc kubenswrapper[4729]: I0127 14:09:37.763354 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d59e8bbd9278b3087043e31eff8b298427c4eca10607eab3ef614de2f50e7588"} Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.779931 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2cbaf55d319d98cbd8f3ec462c7d82a25aa507ca5999b9255ced1027b0526fc4"} Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.780243 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.780255 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"23085da75f206e7c2c02be2cb148cf0a8b648efab516e1a4051fa76e0efda69c"} Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.780169 4729 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.780273 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.782966 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.783022 4729 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba" exitCode=1 Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.783050 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba"} Jan 27 14:09:38 crc kubenswrapper[4729]: I0127 14:09:38.783454 4729 scope.go:117] "RemoveContainer" containerID="0d33057cc457d46d119b529a880cdd42f77c15998385ee987058ec162a81f1ba" Jan 27 14:09:39 crc kubenswrapper[4729]: I0127 14:09:39.790193 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 14:09:39 crc kubenswrapper[4729]: I0127 14:09:39.790494 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"19f48a7b3c21796d6caddfd33043a25a86343609cf4d8b8750df11afcd3263de"} Jan 27 14:09:41 crc kubenswrapper[4729]: I0127 
14:09:41.067025 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:41 crc kubenswrapper[4729]: I0127 14:09:41.067108 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:41 crc kubenswrapper[4729]: I0127 14:09:41.072064 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:43 crc kubenswrapper[4729]: I0127 14:09:43.792653 4729 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:43 crc kubenswrapper[4729]: I0127 14:09:43.811416 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:43 crc kubenswrapper[4729]: I0127 14:09:43.811452 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:43 crc kubenswrapper[4729]: I0127 14:09:43.816170 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:44 crc kubenswrapper[4729]: I0127 14:09:44.082669 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2f36ea22-1209-4630-b58c-658aed15e8b2" Jan 27 14:09:44 crc kubenswrapper[4729]: I0127 14:09:44.815826 4729 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" Jan 27 14:09:44 crc kubenswrapper[4729]: I0127 14:09:44.817542 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12a89154-c512-4f7a-bec3-f2f415009cb0" 
Jan 27 14:09:44 crc kubenswrapper[4729]: I0127 14:09:44.820649 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2f36ea22-1209-4630-b58c-658aed15e8b2" Jan 27 14:09:45 crc kubenswrapper[4729]: I0127 14:09:45.835923 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:09:45 crc kubenswrapper[4729]: I0127 14:09:45.839434 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:09:46 crc kubenswrapper[4729]: I0127 14:09:46.825680 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:09:49 crc kubenswrapper[4729]: I0127 14:09:49.729521 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 14:09:49 crc kubenswrapper[4729]: I0127 14:09:49.813037 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.091296 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.186016 4729 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.190863 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=25.190846588 podStartE2EDuration="25.190846588s" podCreationTimestamp="2026-01-27 14:09:25 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:43.826774641 +0000 UTC m=+270.410965655" watchObservedRunningTime="2026-01-27 14:09:50.190846588 +0000 UTC m=+276.775037602" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.193003 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.193054 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.197133 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.212522 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=7.212507424 podStartE2EDuration="7.212507424s" podCreationTimestamp="2026-01-27 14:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:09:50.207841191 +0000 UTC m=+276.792032215" watchObservedRunningTime="2026-01-27 14:09:50.212507424 +0000 UTC m=+276.796698428" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.301462 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.534643 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.647542 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.877049 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.896260 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:09:50 crc kubenswrapper[4729]: I0127 14:09:50.994848 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.000503 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.234899 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.450523 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.493292 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.671474 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.736322 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 14:09:51 crc kubenswrapper[4729]: I0127 14:09:51.897549 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:09:52 crc kubenswrapper[4729]: I0127 14:09:52.610195 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.183259 4729 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.233615 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.350403 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.454827 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.488142 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.539847 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 14:09:53 crc kubenswrapper[4729]: I0127 14:09:53.843209 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.011716 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.163011 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.181214 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.248192 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.269410 4729 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.411329 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.429855 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.575630 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.763714 4729 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.764091 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://215c9f2769489d5b56991c81484981f4f5b493e0e884781598d44e405d019aee" gracePeriod=5 Jan 27 14:09:54 crc kubenswrapper[4729]: I0127 14:09:54.822035 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.057119 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.106828 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.239985 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 
14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.411442 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.471299 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.688140 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.745468 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.813432 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 14:09:55 crc kubenswrapper[4729]: I0127 14:09:55.970214 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 14:09:56 crc kubenswrapper[4729]: I0127 14:09:56.061593 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 14:09:56 crc kubenswrapper[4729]: I0127 14:09:56.094097 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 14:09:56 crc kubenswrapper[4729]: I0127 14:09:56.383464 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 14:09:56 crc kubenswrapper[4729]: I0127 14:09:56.652513 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 14:09:56 crc kubenswrapper[4729]: I0127 14:09:56.723824 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.128868 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.191189 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.328935 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.472202 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.590253 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.623039 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.660019 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 14:09:57 crc kubenswrapper[4729]: I0127 14:09:57.879891 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.024244 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.047209 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 14:09:58 crc 
kubenswrapper[4729]: I0127 14:09:58.116087 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.178697 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.230009 4729 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.256331 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.317247 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.548442 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.608506 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.722207 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 14:09:58 crc kubenswrapper[4729]: I0127 14:09:58.814685 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.052148 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.217547 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:09:59 crc 
kubenswrapper[4729]: I0127 14:09:59.225022 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.373130 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.398468 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.446826 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.488077 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.495165 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.674766 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.687455 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.687461 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.835134 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.893975 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.894260 4729 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="215c9f2769489d5b56991c81484981f4f5b493e0e884781598d44e405d019aee" exitCode=137 Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.894468 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.935021 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 14:09:59 crc kubenswrapper[4729]: I0127 14:09:59.975119 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.207248 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.242175 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.331965 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.332032 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.333919 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.410749 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411083 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.410950 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411110 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411207 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411310 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411338 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411504 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411755 4729 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411785 4729 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411795 4729 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.411807 4729 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.423406 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.502452 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.513095 4729 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.551828 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.644390 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.671300 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.826396 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.901266 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.901355 4729 scope.go:117] "RemoveContainer" containerID="215c9f2769489d5b56991c81484981f4f5b493e0e884781598d44e405d019aee" Jan 27 14:10:00 crc kubenswrapper[4729]: I0127 14:10:00.901429 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.032193 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.070135 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.129054 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.135510 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.198332 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.318507 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.331284 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.394539 4729 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.498005 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.555411 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.556683 4729 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.556990 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.702102 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.725533 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.735810 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.788430 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.876273 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 14:10:01 crc kubenswrapper[4729]: I0127 14:10:01.961050 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.060204 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.060618 4729 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.074479 4729 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.074510 4729 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="69d9a791-d634-449d-9e6e-58cd198db75b" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.079220 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.079258 4729 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="69d9a791-d634-449d-9e6e-58cd198db75b" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.105277 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.179412 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.240931 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.325457 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.338959 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.468576 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.506185 4729 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.621372 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.650385 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.705616 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.746793 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.749918 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.958743 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:10:02 crc kubenswrapper[4729]: I0127 14:10:02.969481 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.041486 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.060536 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.102999 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 14:10:03 crc 
kubenswrapper[4729]: I0127 14:10:03.143693 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.160572 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.175661 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.278558 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.363500 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.393493 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.487140 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.526725 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.717321 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.896303 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.958152 4729 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 14:10:03 crc kubenswrapper[4729]: I0127 14:10:03.967932 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.091737 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.173144 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.235475 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.270649 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.274002 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.345668 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.351577 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.366421 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.519919 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.599396 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.687890 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.688952 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.776524 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.805858 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.823363 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.856450 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.856968 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.902634 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.911028 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 14:10:04 crc kubenswrapper[4729]: I0127 14:10:04.983511 4729 reflector.go:368] Caches 
populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.021792 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.055567 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.080747 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.177624 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.248351 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.442288 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.460900 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.565523 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.584013 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.603416 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.624695 4729 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.696670 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.745402 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.764025 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.836999 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.970103 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 14:10:05 crc kubenswrapper[4729]: I0127 14:10:05.976462 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.170717 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.274308 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.323558 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.355650 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 
14:10:06.446531 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.488986 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.561302 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.579364 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.870791 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.899953 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 14:10:06 crc kubenswrapper[4729]: I0127 14:10:06.946126 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.074453 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.129495 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.187063 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.268055 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.276854 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.281905 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.301503 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.321816 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.355811 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.372010 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.534122 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.629937 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.743216 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.853550 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.892784 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 14:10:07 crc kubenswrapper[4729]: I0127 14:10:07.904842 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.078663 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.141926 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.377368 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.382174 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.400820 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.515471 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.556304 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.620951 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 
14:10:08.630340 4729 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.635224 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.673727 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.682071 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.684681 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.779434 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.801708 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.932556 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.957199 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 14:10:08 crc kubenswrapper[4729]: I0127 14:10:08.992813 4729 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.082973 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.107404 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.214840 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.311469 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.374492 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.380412 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.420841 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.831467 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.878119 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:10:09 crc kubenswrapper[4729]: I0127 14:10:09.917104 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.141779 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 
14:10:10.225077 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.321812 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.554749 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.706746 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.750487 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.760741 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.815480 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.877384 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 14:10:10 crc kubenswrapper[4729]: I0127 14:10:10.908493 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 14:10:11 crc kubenswrapper[4729]: I0127 14:10:11.378403 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 14:10:11 crc kubenswrapper[4729]: I0127 14:10:11.419524 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 27 14:10:11 crc kubenswrapper[4729]: I0127 14:10:11.595038 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:10:11 crc kubenswrapper[4729]: I0127 14:10:11.741721 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 14:10:11 crc kubenswrapper[4729]: I0127 14:10:11.844049 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 14:10:12 crc kubenswrapper[4729]: I0127 14:10:11.999973 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 14:10:12 crc kubenswrapper[4729]: I0127 14:10:12.118809 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 14:10:12 crc kubenswrapper[4729]: I0127 14:10:12.275311 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 14:10:12 crc kubenswrapper[4729]: I0127 14:10:12.784981 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 14:10:12 crc kubenswrapper[4729]: I0127 14:10:12.804321 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 14:10:12 crc kubenswrapper[4729]: I0127 14:10:12.944674 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 14:10:13 crc kubenswrapper[4729]: I0127 14:10:13.266894 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 14:10:13 crc kubenswrapper[4729]: I0127 14:10:13.859134 4729 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 14:10:14 crc kubenswrapper[4729]: I0127 14:10:14.105997 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 14:10:14 crc kubenswrapper[4729]: I0127 14:10:14.176112 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.283434 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp"] Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.284243 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" podUID="11e15569-f897-42c7-b765-a42aec47482e" containerName="controller-manager" containerID="cri-o://768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2" gracePeriod=30 Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.403998 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r"] Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.404554 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" podUID="f999fa2c-7d62-43f5-b593-385b13d5b6f2" containerName="route-controller-manager" containerID="cri-o://8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a" gracePeriod=30 Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.657833 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.728929 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.781869 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-client-ca\") pod \"11e15569-f897-42c7-b765-a42aec47482e\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782010 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw7nr\" (UniqueName: \"kubernetes.io/projected/f999fa2c-7d62-43f5-b593-385b13d5b6f2-kube-api-access-bw7nr\") pod \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782040 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-config\") pod \"11e15569-f897-42c7-b765-a42aec47482e\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782072 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e15569-f897-42c7-b765-a42aec47482e-serving-cert\") pod \"11e15569-f897-42c7-b765-a42aec47482e\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782103 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f999fa2c-7d62-43f5-b593-385b13d5b6f2-serving-cert\") pod 
\"f999fa2c-7d62-43f5-b593-385b13d5b6f2\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782138 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-client-ca\") pod \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782160 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmnxm\" (UniqueName: \"kubernetes.io/projected/11e15569-f897-42c7-b765-a42aec47482e-kube-api-access-fmnxm\") pod \"11e15569-f897-42c7-b765-a42aec47482e\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782191 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-proxy-ca-bundles\") pod \"11e15569-f897-42c7-b765-a42aec47482e\" (UID: \"11e15569-f897-42c7-b765-a42aec47482e\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.782225 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-config\") pod \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\" (UID: \"f999fa2c-7d62-43f5-b593-385b13d5b6f2\") " Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.783087 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-client-ca" (OuterVolumeSpecName: "client-ca") pod "f999fa2c-7d62-43f5-b593-385b13d5b6f2" (UID: "f999fa2c-7d62-43f5-b593-385b13d5b6f2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.783104 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "11e15569-f897-42c7-b765-a42aec47482e" (UID: "11e15569-f897-42c7-b765-a42aec47482e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.783194 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-client-ca" (OuterVolumeSpecName: "client-ca") pod "11e15569-f897-42c7-b765-a42aec47482e" (UID: "11e15569-f897-42c7-b765-a42aec47482e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.783205 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-config" (OuterVolumeSpecName: "config") pod "f999fa2c-7d62-43f5-b593-385b13d5b6f2" (UID: "f999fa2c-7d62-43f5-b593-385b13d5b6f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.783576 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-config" (OuterVolumeSpecName: "config") pod "11e15569-f897-42c7-b765-a42aec47482e" (UID: "11e15569-f897-42c7-b765-a42aec47482e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.787446 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11e15569-f897-42c7-b765-a42aec47482e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "11e15569-f897-42c7-b765-a42aec47482e" (UID: "11e15569-f897-42c7-b765-a42aec47482e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.787544 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f999fa2c-7d62-43f5-b593-385b13d5b6f2-kube-api-access-bw7nr" (OuterVolumeSpecName: "kube-api-access-bw7nr") pod "f999fa2c-7d62-43f5-b593-385b13d5b6f2" (UID: "f999fa2c-7d62-43f5-b593-385b13d5b6f2"). InnerVolumeSpecName "kube-api-access-bw7nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.787554 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999fa2c-7d62-43f5-b593-385b13d5b6f2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f999fa2c-7d62-43f5-b593-385b13d5b6f2" (UID: "f999fa2c-7d62-43f5-b593-385b13d5b6f2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.788190 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e15569-f897-42c7-b765-a42aec47482e-kube-api-access-fmnxm" (OuterVolumeSpecName: "kube-api-access-fmnxm") pod "11e15569-f897-42c7-b765-a42aec47482e" (UID: "11e15569-f897-42c7-b765-a42aec47482e"). InnerVolumeSpecName "kube-api-access-fmnxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883363 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f999fa2c-7d62-43f5-b593-385b13d5b6f2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883400 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883411 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmnxm\" (UniqueName: \"kubernetes.io/projected/11e15569-f897-42c7-b765-a42aec47482e-kube-api-access-fmnxm\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883421 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883430 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f999fa2c-7d62-43f5-b593-385b13d5b6f2-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883438 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883467 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw7nr\" (UniqueName: \"kubernetes.io/projected/f999fa2c-7d62-43f5-b593-385b13d5b6f2-kube-api-access-bw7nr\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883476 4729 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11e15569-f897-42c7-b765-a42aec47482e-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:36 crc kubenswrapper[4729]: I0127 14:10:36.883484 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11e15569-f897-42c7-b765-a42aec47482e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.103847 4729 generic.go:334] "Generic (PLEG): container finished" podID="11e15569-f897-42c7-b765-a42aec47482e" containerID="768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2" exitCode=0 Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.103946 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" event={"ID":"11e15569-f897-42c7-b765-a42aec47482e","Type":"ContainerDied","Data":"768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2"} Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.103979 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" event={"ID":"11e15569-f897-42c7-b765-a42aec47482e","Type":"ContainerDied","Data":"e13016c96cc68ca79e239a1d208f7921bd8222c1744a04239c82c917ce89dfea"} Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.103999 4729 scope.go:117] "RemoveContainer" containerID="768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.104004 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.106662 4729 generic.go:334] "Generic (PLEG): container finished" podID="f999fa2c-7d62-43f5-b593-385b13d5b6f2" containerID="8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a" exitCode=0 Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.106694 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" event={"ID":"f999fa2c-7d62-43f5-b593-385b13d5b6f2","Type":"ContainerDied","Data":"8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a"} Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.106713 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" event={"ID":"f999fa2c-7d62-43f5-b593-385b13d5b6f2","Type":"ContainerDied","Data":"7722d23203ffefdb3b74e1de0c78f02c48a66373180a2c16f1d00f74c9bba76c"} Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.106774 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.119208 4729 scope.go:117] "RemoveContainer" containerID="768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2" Jan 27 14:10:37 crc kubenswrapper[4729]: E0127 14:10:37.119618 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2\": container with ID starting with 768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2 not found: ID does not exist" containerID="768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.119648 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2"} err="failed to get container status \"768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2\": rpc error: code = NotFound desc = could not find container \"768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2\": container with ID starting with 768ef7dd2245a211050e767c7071981114af7c8a6cd25df9c22756d329f53cf2 not found: ID does not exist" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.119669 4729 scope.go:117] "RemoveContainer" containerID="8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.134035 4729 scope.go:117] "RemoveContainer" containerID="8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a" Jan 27 14:10:37 crc kubenswrapper[4729]: E0127 14:10:37.134804 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a\": container with ID starting with 
8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a not found: ID does not exist" containerID="8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.134834 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a"} err="failed to get container status \"8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a\": rpc error: code = NotFound desc = could not find container \"8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a\": container with ID starting with 8d7178ae1267324a95e22ded41aa631c0b627048db1069c5f3f4753ec4394d4a not found: ID does not exist" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.142197 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp"] Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.145630 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp"] Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.149779 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r"] Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.152534 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fbb49c55-74c5r"] Jan 27 14:10:37 crc kubenswrapper[4729]: E0127 14:10:37.164360 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e15569_f897_42c7_b765_a42aec47482e.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf999fa2c_7d62_43f5_b593_385b13d5b6f2.slice/crio-7722d23203ffefdb3b74e1de0c78f02c48a66373180a2c16f1d00f74c9bba76c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11e15569_f897_42c7_b765_a42aec47482e.slice/crio-e13016c96cc68ca79e239a1d208f7921bd8222c1744a04239c82c917ce89dfea\": RecentStats: unable to find data in memory cache]" Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.597041 4729 patch_prober.go:28] interesting pod/controller-manager-75ccfb69d9-8lvwp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:10:37 crc kubenswrapper[4729]: I0127 14:10:37.597401 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-75ccfb69d9-8lvwp" podUID="11e15569-f897-42c7-b765-a42aec47482e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.057783 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e15569-f897-42c7-b765-a42aec47482e" path="/var/lib/kubelet/pods/11e15569-f897-42c7-b765-a42aec47482e/volumes" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.058955 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f999fa2c-7d62-43f5-b593-385b13d5b6f2" path="/var/lib/kubelet/pods/f999fa2c-7d62-43f5-b593-385b13d5b6f2/volumes" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.100833 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb"] Jan 27 14:10:38 crc kubenswrapper[4729]: E0127 14:10:38.101738 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" containerName="installer" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.101929 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" containerName="installer" Jan 27 14:10:38 crc kubenswrapper[4729]: E0127 14:10:38.102072 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.102186 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:10:38 crc kubenswrapper[4729]: E0127 14:10:38.102309 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11e15569-f897-42c7-b765-a42aec47482e" containerName="controller-manager" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.102421 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="11e15569-f897-42c7-b765-a42aec47482e" containerName="controller-manager" Jan 27 14:10:38 crc kubenswrapper[4729]: E0127 14:10:38.102553 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f999fa2c-7d62-43f5-b593-385b13d5b6f2" containerName="route-controller-manager" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.102663 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f999fa2c-7d62-43f5-b593-385b13d5b6f2" containerName="route-controller-manager" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.102962 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="11e15569-f897-42c7-b765-a42aec47482e" containerName="controller-manager" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.103309 4729 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.103433 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efef83a-a7bb-46a3-b382-e040b7804bf5" containerName="installer" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.103554 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f999fa2c-7d62-43f5-b593-385b13d5b6f2" containerName="route-controller-manager" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.104228 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.107635 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.107988 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.107743 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.108374 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.107808 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.108551 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68ff788d6c-nq7dn"] Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.108863 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.111360 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.113872 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ff788d6c-nq7dn"] Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.115707 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.116021 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.116185 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.116307 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.116737 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.117175 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.120699 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb"] Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.127445 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 
14:10:38.196920 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-config\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.196978 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-client-ca\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.196999 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-client-ca\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.197016 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca9d23b2-558a-4516-be7c-87cd20c7fc56-serving-cert\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.197060 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-config\") pod 
\"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.197082 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-serving-cert\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.197108 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbpk2\" (UniqueName: \"kubernetes.io/projected/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-kube-api-access-gbpk2\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.197224 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-proxy-ca-bundles\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.197283 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g668\" (UniqueName: \"kubernetes.io/projected/ca9d23b2-558a-4516-be7c-87cd20c7fc56-kube-api-access-7g668\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 
14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298434 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-config\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298509 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-client-ca\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298533 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-client-ca\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298552 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca9d23b2-558a-4516-be7c-87cd20c7fc56-serving-cert\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298593 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-config\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: 
\"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298618 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-serving-cert\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298644 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbpk2\" (UniqueName: \"kubernetes.io/projected/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-kube-api-access-gbpk2\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298665 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-proxy-ca-bundles\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.298687 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g668\" (UniqueName: \"kubernetes.io/projected/ca9d23b2-558a-4516-be7c-87cd20c7fc56-kube-api-access-7g668\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.299685 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-client-ca\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.300668 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-config\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.300987 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-client-ca\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.301567 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-config\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.302467 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-proxy-ca-bundles\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.304454 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-serving-cert\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.304907 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca9d23b2-558a-4516-be7c-87cd20c7fc56-serving-cert\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.327504 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g668\" (UniqueName: \"kubernetes.io/projected/ca9d23b2-558a-4516-be7c-87cd20c7fc56-kube-api-access-7g668\") pod \"controller-manager-68ff788d6c-nq7dn\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.330105 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbpk2\" (UniqueName: \"kubernetes.io/projected/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-kube-api-access-gbpk2\") pod \"route-controller-manager-6869846656-sc7pb\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.433561 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.442740 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.640863 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68ff788d6c-nq7dn"] Jan 27 14:10:38 crc kubenswrapper[4729]: I0127 14:10:38.675142 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb"] Jan 27 14:10:38 crc kubenswrapper[4729]: W0127 14:10:38.685004 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccc8f5fc_762b_45f3_afdb_aab7d7d27c68.slice/crio-ab865945de220b6942800cff9e5547bacb2b561ddc4328ff2000203c8bb81c38 WatchSource:0}: Error finding container ab865945de220b6942800cff9e5547bacb2b561ddc4328ff2000203c8bb81c38: Status 404 returned error can't find the container with id ab865945de220b6942800cff9e5547bacb2b561ddc4328ff2000203c8bb81c38 Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.132754 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" event={"ID":"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68","Type":"ContainerStarted","Data":"58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb"} Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.133245 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.133273 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" event={"ID":"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68","Type":"ContainerStarted","Data":"ab865945de220b6942800cff9e5547bacb2b561ddc4328ff2000203c8bb81c38"} Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 
14:10:39.134373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" event={"ID":"ca9d23b2-558a-4516-be7c-87cd20c7fc56","Type":"ContainerStarted","Data":"2dd5078a783a7ce6050e0950049d7b2e32108f3c1ba7695f840ae0cb89d37ab9"} Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.134401 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" event={"ID":"ca9d23b2-558a-4516-be7c-87cd20c7fc56","Type":"ContainerStarted","Data":"3683d06f8bd54c00c7bffce63b276649f706fc21b16cbaef08fddadecd6adaa6"} Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.134645 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.141696 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.146173 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.155221 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" podStartSLOduration=3.155201105 podStartE2EDuration="3.155201105s" podCreationTimestamp="2026-01-27 14:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:10:39.150867035 +0000 UTC m=+325.735058049" watchObservedRunningTime="2026-01-27 14:10:39.155201105 +0000 UTC m=+325.739392109" Jan 27 14:10:39 crc kubenswrapper[4729]: I0127 14:10:39.172828 4729 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" podStartSLOduration=3.172810562 podStartE2EDuration="3.172810562s" podCreationTimestamp="2026-01-27 14:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:10:39.170066732 +0000 UTC m=+325.754257776" watchObservedRunningTime="2026-01-27 14:10:39.172810562 +0000 UTC m=+325.757001566" Jan 27 14:10:43 crc kubenswrapper[4729]: I0127 14:10:43.959072 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7gttv"] Jan 27 14:10:43 crc kubenswrapper[4729]: I0127 14:10:43.960210 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7gttv" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="registry-server" containerID="cri-o://c1392bfc9a308a3edb08f3684e12a5f214428669657e1fc9b5bdabff6b670cc8" gracePeriod=2 Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.162352 4729 generic.go:334] "Generic (PLEG): container finished" podID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerID="c1392bfc9a308a3edb08f3684e12a5f214428669657e1fc9b5bdabff6b670cc8" exitCode=0 Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.162422 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerDied","Data":"c1392bfc9a308a3edb08f3684e12a5f214428669657e1fc9b5bdabff6b670cc8"} Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.896891 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.981103 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-catalog-content\") pod \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.981172 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wxzn\" (UniqueName: \"kubernetes.io/projected/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-kube-api-access-7wxzn\") pod \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.981220 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-utilities\") pod \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\" (UID: \"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9\") " Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.982196 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-utilities" (OuterVolumeSpecName: "utilities") pod "6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" (UID: "6eb2d1a1-31be-45bc-b6b2-ac53d002dba9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:10:44 crc kubenswrapper[4729]: I0127 14:10:44.986510 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-kube-api-access-7wxzn" (OuterVolumeSpecName: "kube-api-access-7wxzn") pod "6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" (UID: "6eb2d1a1-31be-45bc-b6b2-ac53d002dba9"). InnerVolumeSpecName "kube-api-access-7wxzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.082053 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wxzn\" (UniqueName: \"kubernetes.io/projected/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-kube-api-access-7wxzn\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.082082 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.092838 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" (UID: "6eb2d1a1-31be-45bc-b6b2-ac53d002dba9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.169475 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gttv" event={"ID":"6eb2d1a1-31be-45bc-b6b2-ac53d002dba9","Type":"ContainerDied","Data":"3902ded9e07399fd11fe250e6a8a99d47a197848d6e96ccf6dc65cb3e81f3e8b"} Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.169509 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7gttv" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.169538 4729 scope.go:117] "RemoveContainer" containerID="c1392bfc9a308a3edb08f3684e12a5f214428669657e1fc9b5bdabff6b670cc8" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.183007 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.185800 4729 scope.go:117] "RemoveContainer" containerID="bb7f800e512ce47ca4c604ce176f180c48df54f9a8697a69da83605e1cc824f2" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.207486 4729 scope.go:117] "RemoveContainer" containerID="c0d9425a4d7822ca8da39235bdb08514bcf7176312fc8892d8ca6981f2609c82" Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.211183 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7gttv"] Jan 27 14:10:45 crc kubenswrapper[4729]: I0127 14:10:45.215563 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7gttv"] Jan 27 14:10:46 crc kubenswrapper[4729]: I0127 14:10:46.058048 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" path="/var/lib/kubelet/pods/6eb2d1a1-31be-45bc-b6b2-ac53d002dba9/volumes" Jan 27 14:10:52 crc kubenswrapper[4729]: I0127 14:10:52.655186 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:10:52 crc kubenswrapper[4729]: I0127 14:10:52.655763 4729 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:10:55 crc kubenswrapper[4729]: I0127 14:10:55.672252 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68ff788d6c-nq7dn"] Jan 27 14:10:55 crc kubenswrapper[4729]: I0127 14:10:55.672504 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" podUID="ca9d23b2-558a-4516-be7c-87cd20c7fc56" containerName="controller-manager" containerID="cri-o://2dd5078a783a7ce6050e0950049d7b2e32108f3c1ba7695f840ae0cb89d37ab9" gracePeriod=30 Jan 27 14:10:55 crc kubenswrapper[4729]: I0127 14:10:55.689186 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb"] Jan 27 14:10:55 crc kubenswrapper[4729]: I0127 14:10:55.689739 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" podUID="ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" containerName="route-controller-manager" containerID="cri-o://58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb" gracePeriod=30 Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.204928 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.224667 4729 generic.go:334] "Generic (PLEG): container finished" podID="ca9d23b2-558a-4516-be7c-87cd20c7fc56" containerID="2dd5078a783a7ce6050e0950049d7b2e32108f3c1ba7695f840ae0cb89d37ab9" exitCode=0 Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.224765 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" event={"ID":"ca9d23b2-558a-4516-be7c-87cd20c7fc56","Type":"ContainerDied","Data":"2dd5078a783a7ce6050e0950049d7b2e32108f3c1ba7695f840ae0cb89d37ab9"} Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.226027 4729 generic.go:334] "Generic (PLEG): container finished" podID="ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" containerID="58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb" exitCode=0 Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.226054 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" event={"ID":"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68","Type":"ContainerDied","Data":"58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb"} Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.226071 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" event={"ID":"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68","Type":"ContainerDied","Data":"ab865945de220b6942800cff9e5547bacb2b561ddc4328ff2000203c8bb81c38"} Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.226088 4729 scope.go:117] "RemoveContainer" containerID="58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.226279 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.292306 4729 scope.go:117] "RemoveContainer" containerID="58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb" Jan 27 14:10:56 crc kubenswrapper[4729]: E0127 14:10:56.293245 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb\": container with ID starting with 58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb not found: ID does not exist" containerID="58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.293386 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb"} err="failed to get container status \"58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb\": rpc error: code = NotFound desc = could not find container \"58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb\": container with ID starting with 58bb645903a6227dd5cf754d4f4c87d6221bd3ef194bd9030da7c99d1b8cb2eb not found: ID does not exist" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.316823 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-client-ca\") pod \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.317164 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-serving-cert\") pod \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\" (UID: 
\"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.317543 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbpk2\" (UniqueName: \"kubernetes.io/projected/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-kube-api-access-gbpk2\") pod \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.317641 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-config\") pod \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\" (UID: \"ccc8f5fc-762b-45f3-afdb-aab7d7d27c68\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.318163 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-config" (OuterVolumeSpecName: "config") pod "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" (UID: "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.319218 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-client-ca" (OuterVolumeSpecName: "client-ca") pod "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" (UID: "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.321926 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" (UID: "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.322293 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-kube-api-access-gbpk2" (OuterVolumeSpecName: "kube-api-access-gbpk2") pod "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" (UID: "ccc8f5fc-762b-45f3-afdb-aab7d7d27c68"). InnerVolumeSpecName "kube-api-access-gbpk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.333478 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca9d23b2-558a-4516-be7c-87cd20c7fc56-serving-cert\") pod \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419455 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-proxy-ca-bundles\") pod \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419529 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g668\" (UniqueName: \"kubernetes.io/projected/ca9d23b2-558a-4516-be7c-87cd20c7fc56-kube-api-access-7g668\") pod \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419593 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-client-ca\") pod \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419615 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-config\") pod \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\" (UID: \"ca9d23b2-558a-4516-be7c-87cd20c7fc56\") " Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419854 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419890 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbpk2\" (UniqueName: \"kubernetes.io/projected/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-kube-api-access-gbpk2\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419908 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.419918 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.420642 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ca9d23b2-558a-4516-be7c-87cd20c7fc56" (UID: "ca9d23b2-558a-4516-be7c-87cd20c7fc56"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.420676 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-client-ca" (OuterVolumeSpecName: "client-ca") pod "ca9d23b2-558a-4516-be7c-87cd20c7fc56" (UID: "ca9d23b2-558a-4516-be7c-87cd20c7fc56"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.420703 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-config" (OuterVolumeSpecName: "config") pod "ca9d23b2-558a-4516-be7c-87cd20c7fc56" (UID: "ca9d23b2-558a-4516-be7c-87cd20c7fc56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.423163 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca9d23b2-558a-4516-be7c-87cd20c7fc56-kube-api-access-7g668" (OuterVolumeSpecName: "kube-api-access-7g668") pod "ca9d23b2-558a-4516-be7c-87cd20c7fc56" (UID: "ca9d23b2-558a-4516-be7c-87cd20c7fc56"). InnerVolumeSpecName "kube-api-access-7g668". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.423408 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca9d23b2-558a-4516-be7c-87cd20c7fc56-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ca9d23b2-558a-4516-be7c-87cd20c7fc56" (UID: "ca9d23b2-558a-4516-be7c-87cd20c7fc56"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.521336 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.521386 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca9d23b2-558a-4516-be7c-87cd20c7fc56-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.521395 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g668\" (UniqueName: \"kubernetes.io/projected/ca9d23b2-558a-4516-be7c-87cd20c7fc56-kube-api-access-7g668\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.521405 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.521413 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca9d23b2-558a-4516-be7c-87cd20c7fc56-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.556853 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb"] Jan 27 14:10:56 crc kubenswrapper[4729]: I0127 14:10:56.559979 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6869846656-sc7pb"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110368 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6bb966dbc8-759gv"] Jan 27 14:10:57 crc 
kubenswrapper[4729]: E0127 14:10:57.110608 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" containerName="route-controller-manager" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110622 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" containerName="route-controller-manager" Jan 27 14:10:57 crc kubenswrapper[4729]: E0127 14:10:57.110635 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="extract-content" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110647 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="extract-content" Jan 27 14:10:57 crc kubenswrapper[4729]: E0127 14:10:57.110667 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="registry-server" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110676 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="registry-server" Jan 27 14:10:57 crc kubenswrapper[4729]: E0127 14:10:57.110692 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="extract-utilities" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110701 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="extract-utilities" Jan 27 14:10:57 crc kubenswrapper[4729]: E0127 14:10:57.110710 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca9d23b2-558a-4516-be7c-87cd20c7fc56" containerName="controller-manager" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110718 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca9d23b2-558a-4516-be7c-87cd20c7fc56" containerName="controller-manager" Jan 27 
14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110826 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca9d23b2-558a-4516-be7c-87cd20c7fc56" containerName="controller-manager" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110843 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb2d1a1-31be-45bc-b6b2-ac53d002dba9" containerName="registry-server" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.110857 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" containerName="route-controller-manager" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.111355 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.113860 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.114605 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.120236 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.120319 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.120324 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.120331 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.120241 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.120241 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.122266 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.128115 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb966dbc8-759gv"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.230770 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-client-ca\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") 
" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.230811 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e57a3ca-54c8-4747-b155-875e8e883082-serving-cert\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.230828 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-client-ca\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.230856 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djds\" (UniqueName: \"kubernetes.io/projected/2e57a3ca-54c8-4747-b155-875e8e883082-kube-api-access-9djds\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.230966 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-config\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.231160 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjkb7\" (UniqueName: \"kubernetes.io/projected/9bc6734c-0ee3-460f-86ac-d6d75677e535-kube-api-access-mjkb7\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.231193 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc6734c-0ee3-460f-86ac-d6d75677e535-serving-cert\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.231301 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-config\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.231378 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-proxy-ca-bundles\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.233632 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" 
event={"ID":"ca9d23b2-558a-4516-be7c-87cd20c7fc56","Type":"ContainerDied","Data":"3683d06f8bd54c00c7bffce63b276649f706fc21b16cbaef08fddadecd6adaa6"} Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.233695 4729 scope.go:117] "RemoveContainer" containerID="2dd5078a783a7ce6050e0950049d7b2e32108f3c1ba7695f840ae0cb89d37ab9" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.233900 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68ff788d6c-nq7dn" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.260446 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-68ff788d6c-nq7dn"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.263854 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-68ff788d6c-nq7dn"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.332907 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-client-ca\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.332955 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e57a3ca-54c8-4747-b155-875e8e883082-serving-cert\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.332974 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-client-ca\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.332998 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djds\" (UniqueName: \"kubernetes.io/projected/2e57a3ca-54c8-4747-b155-875e8e883082-kube-api-access-9djds\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.333021 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-config\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.333053 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjkb7\" (UniqueName: \"kubernetes.io/projected/9bc6734c-0ee3-460f-86ac-d6d75677e535-kube-api-access-mjkb7\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.333070 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc6734c-0ee3-460f-86ac-d6d75677e535-serving-cert\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 
14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.333094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-config\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.333119 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-proxy-ca-bundles\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.334294 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-proxy-ca-bundles\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.335083 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-config\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.335419 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-client-ca\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " 
pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.335494 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-client-ca\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.335568 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-config\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.337791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e57a3ca-54c8-4747-b155-875e8e883082-serving-cert\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.339548 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc6734c-0ee3-460f-86ac-d6d75677e535-serving-cert\") pod \"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.351663 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjkb7\" (UniqueName: \"kubernetes.io/projected/9bc6734c-0ee3-460f-86ac-d6d75677e535-kube-api-access-mjkb7\") pod 
\"controller-manager-6bb966dbc8-759gv\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.351745 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djds\" (UniqueName: \"kubernetes.io/projected/2e57a3ca-54c8-4747-b155-875e8e883082-kube-api-access-9djds\") pod \"route-controller-manager-6765bdc4c5-6fxff\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.426938 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.438415 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.807342 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6bb966dbc8-759gv"] Jan 27 14:10:57 crc kubenswrapper[4729]: I0127 14:10:57.848970 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff"] Jan 27 14:10:57 crc kubenswrapper[4729]: W0127 14:10:57.854375 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e57a3ca_54c8_4747_b155_875e8e883082.slice/crio-83afeb29304c727ecc412974ab6269686d31f2b7f45032ca1a07e75aa0926921 WatchSource:0}: Error finding container 83afeb29304c727ecc412974ab6269686d31f2b7f45032ca1a07e75aa0926921: Status 404 returned error can't find the container with id 83afeb29304c727ecc412974ab6269686d31f2b7f45032ca1a07e75aa0926921 Jan 27 14:10:58 crc 
kubenswrapper[4729]: I0127 14:10:58.057741 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca9d23b2-558a-4516-be7c-87cd20c7fc56" path="/var/lib/kubelet/pods/ca9d23b2-558a-4516-be7c-87cd20c7fc56/volumes" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.058716 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc8f5fc-762b-45f3-afdb-aab7d7d27c68" path="/var/lib/kubelet/pods/ccc8f5fc-762b-45f3-afdb-aab7d7d27c68/volumes" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.239979 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" event={"ID":"9bc6734c-0ee3-460f-86ac-d6d75677e535","Type":"ContainerStarted","Data":"f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24"} Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.241702 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" event={"ID":"9bc6734c-0ee3-460f-86ac-d6d75677e535","Type":"ContainerStarted","Data":"cbc615218c72c0c6116df8909d07caae9fa48f8bc23e2c3479c7ef0f4e544742"} Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.241865 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.243574 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" event={"ID":"2e57a3ca-54c8-4747-b155-875e8e883082","Type":"ContainerStarted","Data":"f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377"} Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.244068 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.244160 
4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" event={"ID":"2e57a3ca-54c8-4747-b155-875e8e883082","Type":"ContainerStarted","Data":"83afeb29304c727ecc412974ab6269686d31f2b7f45032ca1a07e75aa0926921"} Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.246183 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.256375 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" podStartSLOduration=3.256358307 podStartE2EDuration="3.256358307s" podCreationTimestamp="2026-01-27 14:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:10:58.254697965 +0000 UTC m=+344.838888969" watchObservedRunningTime="2026-01-27 14:10:58.256358307 +0000 UTC m=+344.840549311" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.288936 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" podStartSLOduration=3.288916513 podStartE2EDuration="3.288916513s" podCreationTimestamp="2026-01-27 14:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:10:58.26946534 +0000 UTC m=+344.853656364" watchObservedRunningTime="2026-01-27 14:10:58.288916513 +0000 UTC m=+344.873107517" Jan 27 14:10:58 crc kubenswrapper[4729]: I0127 14:10:58.341526 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:11:15 crc kubenswrapper[4729]: I0127 14:11:15.673859 4729 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb966dbc8-759gv"] Jan 27 14:11:15 crc kubenswrapper[4729]: I0127 14:11:15.674795 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" podUID="9bc6734c-0ee3-460f-86ac-d6d75677e535" containerName="controller-manager" containerID="cri-o://f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24" gracePeriod=30 Jan 27 14:11:15 crc kubenswrapper[4729]: I0127 14:11:15.770620 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff"] Jan 27 14:11:15 crc kubenswrapper[4729]: I0127 14:11:15.770819 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" podUID="2e57a3ca-54c8-4747-b155-875e8e883082" containerName="route-controller-manager" containerID="cri-o://f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377" gracePeriod=30 Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.214399 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.219733 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265659 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9djds\" (UniqueName: \"kubernetes.io/projected/2e57a3ca-54c8-4747-b155-875e8e883082-kube-api-access-9djds\") pod \"2e57a3ca-54c8-4747-b155-875e8e883082\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265725 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-client-ca\") pod \"9bc6734c-0ee3-460f-86ac-d6d75677e535\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265755 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-config\") pod \"2e57a3ca-54c8-4747-b155-875e8e883082\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265784 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-config\") pod \"9bc6734c-0ee3-460f-86ac-d6d75677e535\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265826 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc6734c-0ee3-460f-86ac-d6d75677e535-serving-cert\") pod \"9bc6734c-0ee3-460f-86ac-d6d75677e535\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265851 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/2e57a3ca-54c8-4747-b155-875e8e883082-serving-cert\") pod \"2e57a3ca-54c8-4747-b155-875e8e883082\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265895 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-client-ca\") pod \"2e57a3ca-54c8-4747-b155-875e8e883082\" (UID: \"2e57a3ca-54c8-4747-b155-875e8e883082\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265925 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjkb7\" (UniqueName: \"kubernetes.io/projected/9bc6734c-0ee3-460f-86ac-d6d75677e535-kube-api-access-mjkb7\") pod \"9bc6734c-0ee3-460f-86ac-d6d75677e535\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.265961 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-proxy-ca-bundles\") pod \"9bc6734c-0ee3-460f-86ac-d6d75677e535\" (UID: \"9bc6734c-0ee3-460f-86ac-d6d75677e535\") " Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.266514 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-client-ca" (OuterVolumeSpecName: "client-ca") pod "9bc6734c-0ee3-460f-86ac-d6d75677e535" (UID: "9bc6734c-0ee3-460f-86ac-d6d75677e535"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.266591 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9bc6734c-0ee3-460f-86ac-d6d75677e535" (UID: "9bc6734c-0ee3-460f-86ac-d6d75677e535"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.266689 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-config" (OuterVolumeSpecName: "config") pod "2e57a3ca-54c8-4747-b155-875e8e883082" (UID: "2e57a3ca-54c8-4747-b155-875e8e883082"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.267066 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-client-ca" (OuterVolumeSpecName: "client-ca") pod "2e57a3ca-54c8-4747-b155-875e8e883082" (UID: "2e57a3ca-54c8-4747-b155-875e8e883082"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.267184 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-config" (OuterVolumeSpecName: "config") pod "9bc6734c-0ee3-460f-86ac-d6d75677e535" (UID: "9bc6734c-0ee3-460f-86ac-d6d75677e535"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.271310 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e57a3ca-54c8-4747-b155-875e8e883082-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2e57a3ca-54c8-4747-b155-875e8e883082" (UID: "2e57a3ca-54c8-4747-b155-875e8e883082"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.271576 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc6734c-0ee3-460f-86ac-d6d75677e535-kube-api-access-mjkb7" (OuterVolumeSpecName: "kube-api-access-mjkb7") pod "9bc6734c-0ee3-460f-86ac-d6d75677e535" (UID: "9bc6734c-0ee3-460f-86ac-d6d75677e535"). InnerVolumeSpecName "kube-api-access-mjkb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.271709 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e57a3ca-54c8-4747-b155-875e8e883082-kube-api-access-9djds" (OuterVolumeSpecName: "kube-api-access-9djds") pod "2e57a3ca-54c8-4747-b155-875e8e883082" (UID: "2e57a3ca-54c8-4747-b155-875e8e883082"). InnerVolumeSpecName "kube-api-access-9djds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.273079 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bc6734c-0ee3-460f-86ac-d6d75677e535-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9bc6734c-0ee3-460f-86ac-d6d75677e535" (UID: "9bc6734c-0ee3-460f-86ac-d6d75677e535"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.337025 4729 generic.go:334] "Generic (PLEG): container finished" podID="9bc6734c-0ee3-460f-86ac-d6d75677e535" containerID="f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24" exitCode=0 Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.337095 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" event={"ID":"9bc6734c-0ee3-460f-86ac-d6d75677e535","Type":"ContainerDied","Data":"f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24"} Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.337127 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" event={"ID":"9bc6734c-0ee3-460f-86ac-d6d75677e535","Type":"ContainerDied","Data":"cbc615218c72c0c6116df8909d07caae9fa48f8bc23e2c3479c7ef0f4e544742"} Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.337149 4729 scope.go:117] "RemoveContainer" containerID="f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.337260 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6bb966dbc8-759gv" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.340946 4729 generic.go:334] "Generic (PLEG): container finished" podID="2e57a3ca-54c8-4747-b155-875e8e883082" containerID="f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377" exitCode=0 Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.340994 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" event={"ID":"2e57a3ca-54c8-4747-b155-875e8e883082","Type":"ContainerDied","Data":"f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377"} Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.341015 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.341037 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff" event={"ID":"2e57a3ca-54c8-4747-b155-875e8e883082","Type":"ContainerDied","Data":"83afeb29304c727ecc412974ab6269686d31f2b7f45032ca1a07e75aa0926921"} Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.354222 4729 scope.go:117] "RemoveContainer" containerID="f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24" Jan 27 14:11:16 crc kubenswrapper[4729]: E0127 14:11:16.354575 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24\": container with ID starting with f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24 not found: ID does not exist" containerID="f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.354616 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24"} err="failed to get container status \"f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24\": rpc error: code = NotFound desc = could not find container \"f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24\": container with ID starting with f5a0415598720b8f0a15331ca07267e60f5cc77467a2869d9cd9828add3d6f24 not found: ID does not exist" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.354643 4729 scope.go:117] "RemoveContainer" containerID="f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.365699 4729 scope.go:117] "RemoveContainer" containerID="f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367558 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9djds\" (UniqueName: \"kubernetes.io/projected/2e57a3ca-54c8-4747-b155-875e8e883082-kube-api-access-9djds\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367589 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367599 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367611 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367622 4729 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bc6734c-0ee3-460f-86ac-d6d75677e535-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367635 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e57a3ca-54c8-4747-b155-875e8e883082-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367649 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e57a3ca-54c8-4747-b155-875e8e883082-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367662 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjkb7\" (UniqueName: \"kubernetes.io/projected/9bc6734c-0ee3-460f-86ac-d6d75677e535-kube-api-access-mjkb7\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367671 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9bc6734c-0ee3-460f-86ac-d6d75677e535-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:16 crc kubenswrapper[4729]: E0127 14:11:16.367606 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377\": container with ID starting with f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377 not found: ID does not exist" containerID="f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.367732 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377"} err="failed to get container status 
\"f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377\": rpc error: code = NotFound desc = could not find container \"f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377\": container with ID starting with f8246ca657b7538a980e7dd99a6021e28f4f8af9a6fe574a518237f34ac94377 not found: ID does not exist" Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.371800 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff"] Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.377124 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6765bdc4c5-6fxff"] Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.387722 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6bb966dbc8-759gv"] Jan 27 14:11:16 crc kubenswrapper[4729]: I0127 14:11:16.390704 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6bb966dbc8-759gv"] Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.122957 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z"] Jan 27 14:11:17 crc kubenswrapper[4729]: E0127 14:11:17.123285 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e57a3ca-54c8-4747-b155-875e8e883082" containerName="route-controller-manager" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.123305 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e57a3ca-54c8-4747-b155-875e8e883082" containerName="route-controller-manager" Jan 27 14:11:17 crc kubenswrapper[4729]: E0127 14:11:17.123331 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc6734c-0ee3-460f-86ac-d6d75677e535" containerName="controller-manager" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.123342 4729 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc6734c-0ee3-460f-86ac-d6d75677e535" containerName="controller-manager" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.123498 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e57a3ca-54c8-4747-b155-875e8e883082" containerName="route-controller-manager" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.123545 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc6734c-0ee3-460f-86ac-d6d75677e535" containerName="controller-manager" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.124150 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.128419 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7579d7c77b-cxx7k"] Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.128932 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.129209 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.129417 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.129500 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.129640 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.130083 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.130728 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.135806 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.136058 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.136555 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.136631 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.136759 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.136771 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.138248 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z"] Jan 27 14:11:17 crc 
kubenswrapper[4729]: I0127 14:11:17.143668 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7579d7c77b-cxx7k"] Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.144456 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178622 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-proxy-ca-bundles\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178666 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-client-ca\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178688 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f9cs\" (UniqueName: \"kubernetes.io/projected/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-kube-api-access-8f9cs\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178759 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-serving-cert\") pod 
\"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178790 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-config\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7aff78-2240-458c-b7d7-3b4f32e493d4-serving-cert\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178862 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zt9z\" (UniqueName: \"kubernetes.io/projected/ef7aff78-2240-458c-b7d7-3b4f32e493d4-kube-api-access-7zt9z\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178918 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-client-ca\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.178944 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-config\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281331 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-client-ca\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281399 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-config\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281475 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-proxy-ca-bundles\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281504 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-client-ca\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " 
pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281527 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f9cs\" (UniqueName: \"kubernetes.io/projected/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-kube-api-access-8f9cs\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281613 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-serving-cert\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-config\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281667 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7aff78-2240-458c-b7d7-3b4f32e493d4-serving-cert\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.281722 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zt9z\" (UniqueName: 
\"kubernetes.io/projected/ef7aff78-2240-458c-b7d7-3b4f32e493d4-kube-api-access-7zt9z\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.282773 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-client-ca\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.282800 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-client-ca\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.283307 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-config\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.283302 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-proxy-ca-bundles\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.286674 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-config\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.288700 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-serving-cert\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.292395 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7aff78-2240-458c-b7d7-3b4f32e493d4-serving-cert\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.299842 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f9cs\" (UniqueName: \"kubernetes.io/projected/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-kube-api-access-8f9cs\") pod \"route-controller-manager-8569c87685-4x22z\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.301075 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zt9z\" (UniqueName: \"kubernetes.io/projected/ef7aff78-2240-458c-b7d7-3b4f32e493d4-kube-api-access-7zt9z\") pod \"controller-manager-7579d7c77b-cxx7k\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " 
pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.457759 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.474326 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.653501 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z"] Jan 27 14:11:17 crc kubenswrapper[4729]: W0127 14:11:17.660705 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d56fe3a_8b4f_4174_ab29_2c0fb5fca61c.slice/crio-fb04c7b8780e8ca47d57f331741e5e744370d4bd2b2dbafa7cb90551db68738b WatchSource:0}: Error finding container fb04c7b8780e8ca47d57f331741e5e744370d4bd2b2dbafa7cb90551db68738b: Status 404 returned error can't find the container with id fb04c7b8780e8ca47d57f331741e5e744370d4bd2b2dbafa7cb90551db68738b Jan 27 14:11:17 crc kubenswrapper[4729]: I0127 14:11:17.917750 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7579d7c77b-cxx7k"] Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.058569 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e57a3ca-54c8-4747-b155-875e8e883082" path="/var/lib/kubelet/pods/2e57a3ca-54c8-4747-b155-875e8e883082/volumes" Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.059411 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc6734c-0ee3-460f-86ac-d6d75677e535" path="/var/lib/kubelet/pods/9bc6734c-0ee3-460f-86ac-d6d75677e535/volumes" Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.353413 
4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" event={"ID":"ef7aff78-2240-458c-b7d7-3b4f32e493d4","Type":"ContainerStarted","Data":"0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739"} Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.353456 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" event={"ID":"ef7aff78-2240-458c-b7d7-3b4f32e493d4","Type":"ContainerStarted","Data":"c0ad16148b66874a74c95874ed908dbffb41882672c498f1e6958b388351f466"} Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.353618 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.355068 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" event={"ID":"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c","Type":"ContainerStarted","Data":"3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c"} Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.355140 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" event={"ID":"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c","Type":"ContainerStarted","Data":"fb04c7b8780e8ca47d57f331741e5e744370d4bd2b2dbafa7cb90551db68738b"} Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.355611 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.360082 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:18 crc 
kubenswrapper[4729]: I0127 14:11:18.362108 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.380216 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" podStartSLOduration=3.380195604 podStartE2EDuration="3.380195604s" podCreationTimestamp="2026-01-27 14:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:11:18.377134846 +0000 UTC m=+364.961325860" watchObservedRunningTime="2026-01-27 14:11:18.380195604 +0000 UTC m=+364.964386618" Jan 27 14:11:18 crc kubenswrapper[4729]: I0127 14:11:18.414004 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" podStartSLOduration=3.41398459 podStartE2EDuration="3.41398459s" podCreationTimestamp="2026-01-27 14:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:11:18.412621086 +0000 UTC m=+364.996812100" watchObservedRunningTime="2026-01-27 14:11:18.41398459 +0000 UTC m=+364.998175594" Jan 27 14:11:22 crc kubenswrapper[4729]: I0127 14:11:22.656249 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:11:22 crc kubenswrapper[4729]: I0127 14:11:22.656620 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:11:35 crc kubenswrapper[4729]: I0127 14:11:35.683582 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7579d7c77b-cxx7k"] Jan 27 14:11:35 crc kubenswrapper[4729]: I0127 14:11:35.684345 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" podUID="ef7aff78-2240-458c-b7d7-3b4f32e493d4" containerName="controller-manager" containerID="cri-o://0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739" gracePeriod=30 Jan 27 14:11:35 crc kubenswrapper[4729]: I0127 14:11:35.708275 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z"] Jan 27 14:11:35 crc kubenswrapper[4729]: I0127 14:11:35.708821 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" podUID="4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" containerName="route-controller-manager" containerID="cri-o://3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c" gracePeriod=30 Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.207366 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.313273 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.326899 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-client-ca\") pod \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.326953 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-config\") pod \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.327062 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f9cs\" (UniqueName: \"kubernetes.io/projected/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-kube-api-access-8f9cs\") pod \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.327103 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-serving-cert\") pod \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\" (UID: \"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.328251 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-client-ca" (OuterVolumeSpecName: "client-ca") pod "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" (UID: "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.328696 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-config" (OuterVolumeSpecName: "config") pod "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" (UID: "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.333701 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-kube-api-access-8f9cs" (OuterVolumeSpecName: "kube-api-access-8f9cs") pod "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" (UID: "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c"). InnerVolumeSpecName "kube-api-access-8f9cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.333697 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" (UID: "4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.427789 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-client-ca\") pod \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.427858 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7aff78-2240-458c-b7d7-3b4f32e493d4-serving-cert\") pod \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.427910 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-config\") pod \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428039 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-proxy-ca-bundles\") pod \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428068 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zt9z\" (UniqueName: \"kubernetes.io/projected/ef7aff78-2240-458c-b7d7-3b4f32e493d4-kube-api-access-7zt9z\") pod \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\" (UID: \"ef7aff78-2240-458c-b7d7-3b4f32e493d4\") " Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428333 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f9cs\" (UniqueName: 
\"kubernetes.io/projected/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-kube-api-access-8f9cs\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428360 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428373 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428385 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428769 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef7aff78-2240-458c-b7d7-3b4f32e493d4" (UID: "ef7aff78-2240-458c-b7d7-3b4f32e493d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.428781 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ef7aff78-2240-458c-b7d7-3b4f32e493d4" (UID: "ef7aff78-2240-458c-b7d7-3b4f32e493d4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.429345 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-config" (OuterVolumeSpecName: "config") pod "ef7aff78-2240-458c-b7d7-3b4f32e493d4" (UID: "ef7aff78-2240-458c-b7d7-3b4f32e493d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.431542 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef7aff78-2240-458c-b7d7-3b4f32e493d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef7aff78-2240-458c-b7d7-3b4f32e493d4" (UID: "ef7aff78-2240-458c-b7d7-3b4f32e493d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.431569 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef7aff78-2240-458c-b7d7-3b4f32e493d4-kube-api-access-7zt9z" (OuterVolumeSpecName: "kube-api-access-7zt9z") pod "ef7aff78-2240-458c-b7d7-3b4f32e493d4" (UID: "ef7aff78-2240-458c-b7d7-3b4f32e493d4"). InnerVolumeSpecName "kube-api-access-7zt9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.448284 4729 generic.go:334] "Generic (PLEG): container finished" podID="4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" containerID="3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c" exitCode=0 Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.448327 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.448345 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" event={"ID":"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c","Type":"ContainerDied","Data":"3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c"} Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.448462 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z" event={"ID":"4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c","Type":"ContainerDied","Data":"fb04c7b8780e8ca47d57f331741e5e744370d4bd2b2dbafa7cb90551db68738b"} Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.448483 4729 scope.go:117] "RemoveContainer" containerID="3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.451491 4729 generic.go:334] "Generic (PLEG): container finished" podID="ef7aff78-2240-458c-b7d7-3b4f32e493d4" containerID="0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739" exitCode=0 Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.451537 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" event={"ID":"ef7aff78-2240-458c-b7d7-3b4f32e493d4","Type":"ContainerDied","Data":"0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739"} Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.451568 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" event={"ID":"ef7aff78-2240-458c-b7d7-3b4f32e493d4","Type":"ContainerDied","Data":"c0ad16148b66874a74c95874ed908dbffb41882672c498f1e6958b388351f466"} Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.451926 4729 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7579d7c77b-cxx7k" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.463594 4729 scope.go:117] "RemoveContainer" containerID="3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c" Jan 27 14:11:36 crc kubenswrapper[4729]: E0127 14:11:36.464866 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c\": container with ID starting with 3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c not found: ID does not exist" containerID="3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.464921 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c"} err="failed to get container status \"3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c\": rpc error: code = NotFound desc = could not find container \"3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c\": container with ID starting with 3e3f2c708771ecb28596c3749639061df17ab600918f52dc0b99786cb669205c not found: ID does not exist" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.464948 4729 scope.go:117] "RemoveContainer" containerID="0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.478608 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z"] Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.483626 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8569c87685-4x22z"] Jan 27 14:11:36 crc kubenswrapper[4729]: 
I0127 14:11:36.488494 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7579d7c77b-cxx7k"] Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.489955 4729 scope.go:117] "RemoveContainer" containerID="0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739" Jan 27 14:11:36 crc kubenswrapper[4729]: E0127 14:11:36.490483 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739\": container with ID starting with 0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739 not found: ID does not exist" containerID="0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.490517 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739"} err="failed to get container status \"0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739\": rpc error: code = NotFound desc = could not find container \"0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739\": container with ID starting with 0e6f707b743605002e6b579644eecd5ce569ec136494a991a30eda6b9aa54739 not found: ID does not exist" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.492772 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7579d7c77b-cxx7k"] Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.529458 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.529741 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zt9z\" 
(UniqueName: \"kubernetes.io/projected/ef7aff78-2240-458c-b7d7-3b4f32e493d4-kube-api-access-7zt9z\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.529822 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.529952 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef7aff78-2240-458c-b7d7-3b4f32e493d4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:36 crc kubenswrapper[4729]: I0127 14:11:36.530016 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef7aff78-2240-458c-b7d7-3b4f32e493d4-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.141357 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z"] Jan 27 14:11:37 crc kubenswrapper[4729]: E0127 14:11:37.141807 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" containerName="route-controller-manager" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.141840 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" containerName="route-controller-manager" Jan 27 14:11:37 crc kubenswrapper[4729]: E0127 14:11:37.141859 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef7aff78-2240-458c-b7d7-3b4f32e493d4" containerName="controller-manager" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.141908 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef7aff78-2240-458c-b7d7-3b4f32e493d4" containerName="controller-manager" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.142146 4729 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef7aff78-2240-458c-b7d7-3b4f32e493d4" containerName="controller-manager" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.142185 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" containerName="route-controller-manager" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.142840 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.145148 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f89fbfcc-qq689"] Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.145276 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.145617 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.145768 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.145961 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.146031 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.146057 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.146040 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.148928 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.149203 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.149315 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.149453 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.149894 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.151506 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.159650 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f89fbfcc-qq689"] Jan 27 14:11:37 crc 
kubenswrapper[4729]: I0127 14:11:37.160996 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.166837 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z"] Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.237924 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-client-ca\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.237965 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-proxy-ca-bundles\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.238002 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cn7\" (UniqueName: \"kubernetes.io/projected/327baf12-9bfa-420c-afdc-307c9c246817-kube-api-access-24cn7\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.238019 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/327baf12-9bfa-420c-afdc-307c9c246817-serving-cert\") pod 
\"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.238038 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-config\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.238078 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e416e55-1f64-41a6-80e8-be4695919259-serving-cert\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.238185 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ksp\" (UniqueName: \"kubernetes.io/projected/9e416e55-1f64-41a6-80e8-be4695919259-kube-api-access-z8ksp\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.238268 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-client-ca\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 
14:11:37.238304 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-config\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339300 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-client-ca\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339367 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-proxy-ca-bundles\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339405 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24cn7\" (UniqueName: \"kubernetes.io/projected/327baf12-9bfa-420c-afdc-307c9c246817-kube-api-access-24cn7\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339440 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/327baf12-9bfa-420c-afdc-307c9c246817-serving-cert\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: 
\"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-config\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339529 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e416e55-1f64-41a6-80e8-be4695919259-serving-cert\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339587 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ksp\" (UniqueName: \"kubernetes.io/projected/9e416e55-1f64-41a6-80e8-be4695919259-kube-api-access-z8ksp\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339642 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-client-ca\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.339688 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-config\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.340542 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-client-ca\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.340835 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-proxy-ca-bundles\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.340846 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-client-ca\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.342341 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-config\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.342539 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-config\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.344325 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/327baf12-9bfa-420c-afdc-307c9c246817-serving-cert\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.344511 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e416e55-1f64-41a6-80e8-be4695919259-serving-cert\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.369298 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ksp\" (UniqueName: \"kubernetes.io/projected/9e416e55-1f64-41a6-80e8-be4695919259-kube-api-access-z8ksp\") pod \"controller-manager-6f89fbfcc-qq689\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.371211 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24cn7\" (UniqueName: \"kubernetes.io/projected/327baf12-9bfa-420c-afdc-307c9c246817-kube-api-access-24cn7\") pod \"route-controller-manager-59fddb565-n8g6z\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 
crc kubenswrapper[4729]: I0127 14:11:37.469257 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.492416 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.897143 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z"] Jan 27 14:11:37 crc kubenswrapper[4729]: W0127 14:11:37.907402 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod327baf12_9bfa_420c_afdc_307c9c246817.slice/crio-7cd64c74bc21e91ad536d5b24e85cd927dd769817a46f5ab72973c53d6565983 WatchSource:0}: Error finding container 7cd64c74bc21e91ad536d5b24e85cd927dd769817a46f5ab72973c53d6565983: Status 404 returned error can't find the container with id 7cd64c74bc21e91ad536d5b24e85cd927dd769817a46f5ab72973c53d6565983 Jan 27 14:11:37 crc kubenswrapper[4729]: W0127 14:11:37.910230 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e416e55_1f64_41a6_80e8_be4695919259.slice/crio-28dbebf07f10c6d1bf2b34e78e950863bd09528bda7be5ce8c0b4823af439223 WatchSource:0}: Error finding container 28dbebf07f10c6d1bf2b34e78e950863bd09528bda7be5ce8c0b4823af439223: Status 404 returned error can't find the container with id 28dbebf07f10c6d1bf2b34e78e950863bd09528bda7be5ce8c0b4823af439223 Jan 27 14:11:37 crc kubenswrapper[4729]: I0127 14:11:37.912617 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f89fbfcc-qq689"] Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.056712 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c" path="/var/lib/kubelet/pods/4d56fe3a-8b4f-4174-ab29-2c0fb5fca61c/volumes" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.057465 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef7aff78-2240-458c-b7d7-3b4f32e493d4" path="/var/lib/kubelet/pods/ef7aff78-2240-458c-b7d7-3b4f32e493d4/volumes" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.465827 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" event={"ID":"327baf12-9bfa-420c-afdc-307c9c246817","Type":"ContainerStarted","Data":"c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d"} Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.466275 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.466299 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" event={"ID":"327baf12-9bfa-420c-afdc-307c9c246817","Type":"ContainerStarted","Data":"7cd64c74bc21e91ad536d5b24e85cd927dd769817a46f5ab72973c53d6565983"} Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.468278 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" event={"ID":"9e416e55-1f64-41a6-80e8-be4695919259","Type":"ContainerStarted","Data":"c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814"} Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.468325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" event={"ID":"9e416e55-1f64-41a6-80e8-be4695919259","Type":"ContainerStarted","Data":"28dbebf07f10c6d1bf2b34e78e950863bd09528bda7be5ce8c0b4823af439223"} Jan 27 14:11:38 
crc kubenswrapper[4729]: I0127 14:11:38.468611 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.474608 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.498844 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" podStartSLOduration=3.4988241589999998 podStartE2EDuration="3.498824159s" podCreationTimestamp="2026-01-27 14:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:11:38.491016471 +0000 UTC m=+385.075207505" watchObservedRunningTime="2026-01-27 14:11:38.498824159 +0000 UTC m=+385.083015173" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.897799 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:38 crc kubenswrapper[4729]: I0127 14:11:38.915229 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" podStartSLOduration=3.915212484 podStartE2EDuration="3.915212484s" podCreationTimestamp="2026-01-27 14:11:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:11:38.518485748 +0000 UTC m=+385.102676752" watchObservedRunningTime="2026-01-27 14:11:38.915212484 +0000 UTC m=+385.499403488" Jan 27 14:11:52 crc kubenswrapper[4729]: I0127 14:11:52.325586 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-89l7c"] 
Jan 27 14:11:52 crc kubenswrapper[4729]: I0127 14:11:52.654895 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:11:52 crc kubenswrapper[4729]: I0127 14:11:52.655270 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:11:52 crc kubenswrapper[4729]: I0127 14:11:52.655335 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:11:52 crc kubenswrapper[4729]: I0127 14:11:52.656072 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d6ab415e1f8e163dce5bc940fd983790a5e9e14fc18252da80a42960d3d6ae0"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:11:52 crc kubenswrapper[4729]: I0127 14:11:52.656142 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://3d6ab415e1f8e163dce5bc940fd983790a5e9e14fc18252da80a42960d3d6ae0" gracePeriod=600 Jan 27 14:11:53 crc kubenswrapper[4729]: I0127 14:11:53.543667 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
containerID="3d6ab415e1f8e163dce5bc940fd983790a5e9e14fc18252da80a42960d3d6ae0" exitCode=0 Jan 27 14:11:53 crc kubenswrapper[4729]: I0127 14:11:53.543755 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"3d6ab415e1f8e163dce5bc940fd983790a5e9e14fc18252da80a42960d3d6ae0"} Jan 27 14:11:53 crc kubenswrapper[4729]: I0127 14:11:53.544157 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"4dd419eedd32250b5a6b51c6a6227f2b147584e939fb9591ec1e180b53cbd253"} Jan 27 14:11:53 crc kubenswrapper[4729]: I0127 14:11:53.544189 4729 scope.go:117] "RemoveContainer" containerID="bb6d3af5b1b6ae950dce225590efbd5e55a0fe036f829487b3d13d0ba21177fa" Jan 27 14:11:55 crc kubenswrapper[4729]: I0127 14:11:55.687228 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f89fbfcc-qq689"] Jan 27 14:11:55 crc kubenswrapper[4729]: I0127 14:11:55.687803 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" podUID="9e416e55-1f64-41a6-80e8-be4695919259" containerName="controller-manager" containerID="cri-o://c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814" gracePeriod=30 Jan 27 14:11:55 crc kubenswrapper[4729]: I0127 14:11:55.772774 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z"] Jan 27 14:11:55 crc kubenswrapper[4729]: I0127 14:11:55.773033 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" podUID="327baf12-9bfa-420c-afdc-307c9c246817" 
containerName="route-controller-manager" containerID="cri-o://c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d" gracePeriod=30 Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.255056 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.259497 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296594 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-client-ca\") pod \"327baf12-9bfa-420c-afdc-307c9c246817\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296632 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-config\") pod \"327baf12-9bfa-420c-afdc-307c9c246817\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296666 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ksp\" (UniqueName: \"kubernetes.io/projected/9e416e55-1f64-41a6-80e8-be4695919259-kube-api-access-z8ksp\") pod \"9e416e55-1f64-41a6-80e8-be4695919259\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296714 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-config\") pod \"9e416e55-1f64-41a6-80e8-be4695919259\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " Jan 27 14:11:56 crc 
kubenswrapper[4729]: I0127 14:11:56.296737 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e416e55-1f64-41a6-80e8-be4695919259-serving-cert\") pod \"9e416e55-1f64-41a6-80e8-be4695919259\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296758 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-proxy-ca-bundles\") pod \"9e416e55-1f64-41a6-80e8-be4695919259\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296804 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/327baf12-9bfa-420c-afdc-307c9c246817-serving-cert\") pod \"327baf12-9bfa-420c-afdc-307c9c246817\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296823 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24cn7\" (UniqueName: \"kubernetes.io/projected/327baf12-9bfa-420c-afdc-307c9c246817-kube-api-access-24cn7\") pod \"327baf12-9bfa-420c-afdc-307c9c246817\" (UID: \"327baf12-9bfa-420c-afdc-307c9c246817\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.296838 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-client-ca\") pod \"9e416e55-1f64-41a6-80e8-be4695919259\" (UID: \"9e416e55-1f64-41a6-80e8-be4695919259\") " Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.297304 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"327baf12-9bfa-420c-afdc-307c9c246817" (UID: "327baf12-9bfa-420c-afdc-307c9c246817"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.297417 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-config" (OuterVolumeSpecName: "config") pod "327baf12-9bfa-420c-afdc-307c9c246817" (UID: "327baf12-9bfa-420c-afdc-307c9c246817"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.298144 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9e416e55-1f64-41a6-80e8-be4695919259" (UID: "9e416e55-1f64-41a6-80e8-be4695919259"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.298202 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-client-ca" (OuterVolumeSpecName: "client-ca") pod "9e416e55-1f64-41a6-80e8-be4695919259" (UID: "9e416e55-1f64-41a6-80e8-be4695919259"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.298252 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-config" (OuterVolumeSpecName: "config") pod "9e416e55-1f64-41a6-80e8-be4695919259" (UID: "9e416e55-1f64-41a6-80e8-be4695919259"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.301471 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e416e55-1f64-41a6-80e8-be4695919259-kube-api-access-z8ksp" (OuterVolumeSpecName: "kube-api-access-z8ksp") pod "9e416e55-1f64-41a6-80e8-be4695919259" (UID: "9e416e55-1f64-41a6-80e8-be4695919259"). InnerVolumeSpecName "kube-api-access-z8ksp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.301728 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e416e55-1f64-41a6-80e8-be4695919259-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9e416e55-1f64-41a6-80e8-be4695919259" (UID: "9e416e55-1f64-41a6-80e8-be4695919259"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.301818 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/327baf12-9bfa-420c-afdc-307c9c246817-kube-api-access-24cn7" (OuterVolumeSpecName: "kube-api-access-24cn7") pod "327baf12-9bfa-420c-afdc-307c9c246817" (UID: "327baf12-9bfa-420c-afdc-307c9c246817"). InnerVolumeSpecName "kube-api-access-24cn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.302242 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/327baf12-9bfa-420c-afdc-307c9c246817-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "327baf12-9bfa-420c-afdc-307c9c246817" (UID: "327baf12-9bfa-420c-afdc-307c9c246817"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.397972 4729 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/327baf12-9bfa-420c-afdc-307c9c246817-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398010 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398025 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24cn7\" (UniqueName: \"kubernetes.io/projected/327baf12-9bfa-420c-afdc-307c9c246817-kube-api-access-24cn7\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398038 4729 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398049 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/327baf12-9bfa-420c-afdc-307c9c246817-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398059 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ksp\" (UniqueName: \"kubernetes.io/projected/9e416e55-1f64-41a6-80e8-be4695919259-kube-api-access-z8ksp\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398068 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398078 4729 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e416e55-1f64-41a6-80e8-be4695919259-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.398087 4729 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9e416e55-1f64-41a6-80e8-be4695919259-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.565784 4729 generic.go:334] "Generic (PLEG): container finished" podID="327baf12-9bfa-420c-afdc-307c9c246817" containerID="c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d" exitCode=0 Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.565899 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" event={"ID":"327baf12-9bfa-420c-afdc-307c9c246817","Type":"ContainerDied","Data":"c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d"} Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.565937 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" event={"ID":"327baf12-9bfa-420c-afdc-307c9c246817","Type":"ContainerDied","Data":"7cd64c74bc21e91ad536d5b24e85cd927dd769817a46f5ab72973c53d6565983"} Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.565972 4729 scope.go:117] "RemoveContainer" containerID="c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.566144 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.568074 4729 generic.go:334] "Generic (PLEG): container finished" podID="9e416e55-1f64-41a6-80e8-be4695919259" containerID="c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814" exitCode=0 Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.568116 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" event={"ID":"9e416e55-1f64-41a6-80e8-be4695919259","Type":"ContainerDied","Data":"c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814"} Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.568140 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" event={"ID":"9e416e55-1f64-41a6-80e8-be4695919259","Type":"ContainerDied","Data":"28dbebf07f10c6d1bf2b34e78e950863bd09528bda7be5ce8c0b4823af439223"} Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.568185 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f89fbfcc-qq689" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.589729 4729 scope.go:117] "RemoveContainer" containerID="c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d" Jan 27 14:11:56 crc kubenswrapper[4729]: E0127 14:11:56.591211 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d\": container with ID starting with c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d not found: ID does not exist" containerID="c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.591268 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d"} err="failed to get container status \"c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d\": rpc error: code = NotFound desc = could not find container \"c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d\": container with ID starting with c03b61c1ccc912fd2f58fe78b408b64cf77394261f82c2fa0a2a6fc92590a44d not found: ID does not exist" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.591305 4729 scope.go:117] "RemoveContainer" containerID="c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.601623 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z"] Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.606642 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59fddb565-n8g6z"] Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.615106 4729 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f89fbfcc-qq689"] Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.617350 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f89fbfcc-qq689"] Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.617660 4729 scope.go:117] "RemoveContainer" containerID="c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814" Jan 27 14:11:56 crc kubenswrapper[4729]: E0127 14:11:56.618111 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814\": container with ID starting with c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814 not found: ID does not exist" containerID="c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814" Jan 27 14:11:56 crc kubenswrapper[4729]: I0127 14:11:56.618165 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814"} err="failed to get container status \"c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814\": rpc error: code = NotFound desc = could not find container \"c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814\": container with ID starting with c3f5e4bb84258179c68bb335b8b5674a5fd53a4d26c8814d9d5fbeb12188f814 not found: ID does not exist" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.176145 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849f748766-gssnb"] Jan 27 14:11:57 crc kubenswrapper[4729]: E0127 14:11:57.176501 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e416e55-1f64-41a6-80e8-be4695919259" containerName="controller-manager" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.176522 
4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e416e55-1f64-41a6-80e8-be4695919259" containerName="controller-manager" Jan 27 14:11:57 crc kubenswrapper[4729]: E0127 14:11:57.176547 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="327baf12-9bfa-420c-afdc-307c9c246817" containerName="route-controller-manager" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.176562 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="327baf12-9bfa-420c-afdc-307c9c246817" containerName="route-controller-manager" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.176738 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="327baf12-9bfa-420c-afdc-307c9c246817" containerName="route-controller-manager" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.176765 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e416e55-1f64-41a6-80e8-be4695919259" containerName="controller-manager" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.177431 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.179772 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.180464 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b8698c4c9-b9h2n"] Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.181603 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.181738 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.181923 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.182724 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.182907 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.183007 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.185506 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.187018 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849f748766-gssnb"] Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.187214 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.187421 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.188405 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.188599 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 14:11:57 crc 
kubenswrapper[4729]: I0127 14:11:57.189553 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.191412 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b8698c4c9-b9h2n"] Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.195909 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.208292 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-client-ca\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.208332 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pmjd\" (UniqueName: \"kubernetes.io/projected/bc31ccea-7193-473e-9f6c-5a61d6937365-kube-api-access-4pmjd\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.208359 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc31ccea-7193-473e-9f6c-5a61d6937365-serving-cert\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.208386 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-qsbbm\" (UniqueName: \"kubernetes.io/projected/d0b8aa18-25ff-4ac6-8df1-187f271909ac-kube-api-access-qsbbm\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.208407 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b8aa18-25ff-4ac6-8df1-187f271909ac-client-ca\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.209276 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-config\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.209327 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b8aa18-25ff-4ac6-8df1-187f271909ac-config\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.209364 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-proxy-ca-bundles\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: 
\"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.209439 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b8aa18-25ff-4ac6-8df1-187f271909ac-serving-cert\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.310839 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b8aa18-25ff-4ac6-8df1-187f271909ac-serving-cert\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.310921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-client-ca\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.310940 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pmjd\" (UniqueName: \"kubernetes.io/projected/bc31ccea-7193-473e-9f6c-5a61d6937365-kube-api-access-4pmjd\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.310961 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc31ccea-7193-473e-9f6c-5a61d6937365-serving-cert\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.310994 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsbbm\" (UniqueName: \"kubernetes.io/projected/d0b8aa18-25ff-4ac6-8df1-187f271909ac-kube-api-access-qsbbm\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.311012 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b8aa18-25ff-4ac6-8df1-187f271909ac-client-ca\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.311030 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-config\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.311048 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b8aa18-25ff-4ac6-8df1-187f271909ac-config\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 
crc kubenswrapper[4729]: I0127 14:11:57.311069 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-proxy-ca-bundles\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.312523 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b8aa18-25ff-4ac6-8df1-187f271909ac-client-ca\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.312536 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-proxy-ca-bundles\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.313361 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b8aa18-25ff-4ac6-8df1-187f271909ac-config\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.313904 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-client-ca\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " 
pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.314068 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc31ccea-7193-473e-9f6c-5a61d6937365-config\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.316580 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b8aa18-25ff-4ac6-8df1-187f271909ac-serving-cert\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.317138 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc31ccea-7193-473e-9f6c-5a61d6937365-serving-cert\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.327865 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsbbm\" (UniqueName: \"kubernetes.io/projected/d0b8aa18-25ff-4ac6-8df1-187f271909ac-kube-api-access-qsbbm\") pod \"route-controller-manager-849f748766-gssnb\" (UID: \"d0b8aa18-25ff-4ac6-8df1-187f271909ac\") " pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.328182 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pmjd\" (UniqueName: 
\"kubernetes.io/projected/bc31ccea-7193-473e-9f6c-5a61d6937365-kube-api-access-4pmjd\") pod \"controller-manager-b8698c4c9-b9h2n\" (UID: \"bc31ccea-7193-473e-9f6c-5a61d6937365\") " pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.501127 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.508865 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.906943 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849f748766-gssnb"] Jan 27 14:11:57 crc kubenswrapper[4729]: I0127 14:11:57.957537 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b8698c4c9-b9h2n"] Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.059001 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="327baf12-9bfa-420c-afdc-307c9c246817" path="/var/lib/kubelet/pods/327baf12-9bfa-420c-afdc-307c9c246817/volumes" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.060410 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e416e55-1f64-41a6-80e8-be4695919259" path="/var/lib/kubelet/pods/9e416e55-1f64-41a6-80e8-be4695919259/volumes" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.583720 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" event={"ID":"bc31ccea-7193-473e-9f6c-5a61d6937365","Type":"ContainerStarted","Data":"1194e0307cc6ffa65389cf1bf393cb6f3cf89a29815a3e6871d63570491f06f9"} Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.583765 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" event={"ID":"bc31ccea-7193-473e-9f6c-5a61d6937365","Type":"ContainerStarted","Data":"ba2a42654f04513ae98d650b28a3aab3819198e76bcdddbfa800d6791d0b1e97"} Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.584058 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.585315 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" event={"ID":"d0b8aa18-25ff-4ac6-8df1-187f271909ac","Type":"ContainerStarted","Data":"6665941671ab6729293f3e9329d1d37d0cab26539be0f9968cf16b5d157ae880"} Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.585616 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.585630 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" event={"ID":"d0b8aa18-25ff-4ac6-8df1-187f271909ac","Type":"ContainerStarted","Data":"f16e5be68966f5f6b7e008c008311707eb15a4c6c9d8a0d3a36104fda80960bf"} Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.589686 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.591520 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.603445 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-b8698c4c9-b9h2n" podStartSLOduration=3.6034278349999997 podStartE2EDuration="3.603427835s" podCreationTimestamp="2026-01-27 14:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:11:58.600905658 +0000 UTC m=+405.185096682" watchObservedRunningTime="2026-01-27 14:11:58.603427835 +0000 UTC m=+405.187618859" Jan 27 14:11:58 crc kubenswrapper[4729]: I0127 14:11:58.646051 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-849f748766-gssnb" podStartSLOduration=3.6460358409999998 podStartE2EDuration="3.646035841s" podCreationTimestamp="2026-01-27 14:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:11:58.621131483 +0000 UTC m=+405.205322517" watchObservedRunningTime="2026-01-27 14:11:58.646035841 +0000 UTC m=+405.230226855" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.359684 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" podUID="cb22ecac-005d-414f-928c-5714be9f7596" containerName="oauth-openshift" containerID="cri-o://df4a4c19caf4dddd9cb6db2689c4d167040642247d15e3eb1109b7a0c3bb2c43" gracePeriod=15 Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.688058 4729 generic.go:334] "Generic (PLEG): container finished" podID="cb22ecac-005d-414f-928c-5714be9f7596" containerID="df4a4c19caf4dddd9cb6db2689c4d167040642247d15e3eb1109b7a0c3bb2c43" exitCode=0 Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.688114 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" 
event={"ID":"cb22ecac-005d-414f-928c-5714be9f7596","Type":"ContainerDied","Data":"df4a4c19caf4dddd9cb6db2689c4d167040642247d15e3eb1109b7a0c3bb2c43"} Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.849788 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875652 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twlhw\" (UniqueName: \"kubernetes.io/projected/cb22ecac-005d-414f-928c-5714be9f7596-kube-api-access-twlhw\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875738 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-service-ca\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875763 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-router-certs\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875789 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-ocp-branding-template\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875806 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-session\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875843 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-provider-selection\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875894 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb22ecac-005d-414f-928c-5714be9f7596-audit-dir\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.875929 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-audit-policies\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.876004 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-cliconfig\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.876049 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-serving-cert\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.876081 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-trusted-ca-bundle\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.876123 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-login\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.876198 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-error\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.876224 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-idp-0-file-data\") pod \"cb22ecac-005d-414f-928c-5714be9f7596\" (UID: \"cb22ecac-005d-414f-928c-5714be9f7596\") " Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.877376 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb22ecac-005d-414f-928c-5714be9f7596-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.879283 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.879294 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.879396 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.879926 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.891340 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb22ecac-005d-414f-928c-5714be9f7596-kube-api-access-twlhw" (OuterVolumeSpecName: "kube-api-access-twlhw") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "kube-api-access-twlhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.893332 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-84cb85b76-ggwsl"] Jan 27 14:12:17 crc kubenswrapper[4729]: E0127 14:12:17.893575 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb22ecac-005d-414f-928c-5714be9f7596" containerName="oauth-openshift" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.893619 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb22ecac-005d-414f-928c-5714be9f7596" containerName="oauth-openshift" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.893702 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb22ecac-005d-414f-928c-5714be9f7596" containerName="oauth-openshift" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.894108 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.904219 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84cb85b76-ggwsl"] Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.904299 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.904646 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.907715 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.908044 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.908282 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.908515 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.910224 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.910799 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "cb22ecac-005d-414f-928c-5714be9f7596" (UID: "cb22ecac-005d-414f-928c-5714be9f7596"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978606 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978679 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-error\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978700 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c74ec90-7840-499f-99d6-afd599d3dcf1-audit-dir\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978731 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978770 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978850 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-audit-policies\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978902 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-login\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978921 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.978940 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979105 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979210 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979270 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-session\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: 
\"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979345 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6nxt\" (UniqueName: \"kubernetes.io/projected/8c74ec90-7840-499f-99d6-afd599d3dcf1-kube-api-access-m6nxt\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979404 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979559 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979576 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979611 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twlhw\" (UniqueName: \"kubernetes.io/projected/cb22ecac-005d-414f-928c-5714be9f7596-kube-api-access-twlhw\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979623 4729 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979637 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979651 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979662 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979674 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979686 4729 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb22ecac-005d-414f-928c-5714be9f7596-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979695 4729 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 
14:12:17.979706 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979718 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979731 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:17 crc kubenswrapper[4729]: I0127 14:12:17.979742 4729 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cb22ecac-005d-414f-928c-5714be9f7596-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.080986 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-audit-policies\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081040 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-login\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " 
pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081063 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081081 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081101 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081119 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081142 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-session\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081164 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6nxt\" (UniqueName: \"kubernetes.io/projected/8c74ec90-7840-499f-99d6-afd599d3dcf1-kube-api-access-m6nxt\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081181 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081204 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-error\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081221 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " 
pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081236 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c74ec90-7840-499f-99d6-afd599d3dcf1-audit-dir\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081255 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081285 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.081746 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c74ec90-7840-499f-99d6-afd599d3dcf1-audit-dir\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.083381 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.083497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.083592 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-audit-policies\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.083756 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-service-ca\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.084632 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-error\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc 
kubenswrapper[4729]: I0127 14:12:18.084812 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.084917 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-router-certs\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.086113 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-session\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.086475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-template-login\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.086798 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.087981 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.088780 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c74ec90-7840-499f-99d6-afd599d3dcf1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.098606 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6nxt\" (UniqueName: \"kubernetes.io/projected/8c74ec90-7840-499f-99d6-afd599d3dcf1-kube-api-access-m6nxt\") pod \"oauth-openshift-84cb85b76-ggwsl\" (UID: \"8c74ec90-7840-499f-99d6-afd599d3dcf1\") " pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.245352 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.675805 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-84cb85b76-ggwsl"] Jan 27 14:12:18 crc kubenswrapper[4729]: W0127 14:12:18.683010 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c74ec90_7840_499f_99d6_afd599d3dcf1.slice/crio-9b28a4d9aeebc333be24a72f805b1cecd23df95110a68d178d9f89fbf4803144 WatchSource:0}: Error finding container 9b28a4d9aeebc333be24a72f805b1cecd23df95110a68d178d9f89fbf4803144: Status 404 returned error can't find the container with id 9b28a4d9aeebc333be24a72f805b1cecd23df95110a68d178d9f89fbf4803144 Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.701586 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" event={"ID":"8c74ec90-7840-499f-99d6-afd599d3dcf1","Type":"ContainerStarted","Data":"9b28a4d9aeebc333be24a72f805b1cecd23df95110a68d178d9f89fbf4803144"} Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.703919 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" event={"ID":"cb22ecac-005d-414f-928c-5714be9f7596","Type":"ContainerDied","Data":"896773e02dc9b8e4d58c2490a987da1ae4744c09f49ad3b63350ef537721fe98"} Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.703970 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-89l7c" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.703980 4729 scope.go:117] "RemoveContainer" containerID="df4a4c19caf4dddd9cb6db2689c4d167040642247d15e3eb1109b7a0c3bb2c43" Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.723428 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-89l7c"] Jan 27 14:12:18 crc kubenswrapper[4729]: I0127 14:12:18.733631 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-89l7c"] Jan 27 14:12:19 crc kubenswrapper[4729]: I0127 14:12:19.714982 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" event={"ID":"8c74ec90-7840-499f-99d6-afd599d3dcf1","Type":"ContainerStarted","Data":"3a9c59cc6a227b7be7f6118678d78fc67ea56a09ce5b03ee84c10cbee36ffb04"} Jan 27 14:12:19 crc kubenswrapper[4729]: I0127 14:12:19.715362 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:19 crc kubenswrapper[4729]: I0127 14:12:19.723258 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" Jan 27 14:12:19 crc kubenswrapper[4729]: I0127 14:12:19.749933 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-84cb85b76-ggwsl" podStartSLOduration=27.749861014 podStartE2EDuration="27.749861014s" podCreationTimestamp="2026-01-27 14:11:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:12:19.744742818 +0000 UTC m=+426.328933882" watchObservedRunningTime="2026-01-27 14:12:19.749861014 +0000 UTC m=+426.334052088" Jan 27 14:12:20 crc kubenswrapper[4729]: I0127 
14:12:20.059068 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb22ecac-005d-414f-928c-5714be9f7596" path="/var/lib/kubelet/pods/cb22ecac-005d-414f-928c-5714be9f7596/volumes" Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.905959 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpvk8"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.906894 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dpvk8" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="registry-server" containerID="cri-o://185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4" gracePeriod=30 Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.918484 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgstt"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.918787 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tgstt" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="registry-server" containerID="cri-o://140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260" gracePeriod=30 Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.926478 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8548"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.926721 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" containerID="cri-o://daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850" gracePeriod=30 Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.941668 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gf4lk"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.942038 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gf4lk" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="registry-server" containerID="cri-o://ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf" gracePeriod=30 Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.951854 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c26vz"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.952596 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.959693 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2b2hj"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.960044 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2b2hj" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="registry-server" containerID="cri-o://9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3" gracePeriod=30 Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.974796 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c26vz"] Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.995667 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf7bbeaf-d788-4a89-94f5-af01034515c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 
14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.995732 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl27r\" (UniqueName: \"kubernetes.io/projected/cf7bbeaf-d788-4a89-94f5-af01034515c5-kube-api-access-dl27r\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:34 crc kubenswrapper[4729]: I0127 14:12:34.995767 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cf7bbeaf-d788-4a89-94f5-af01034515c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.099521 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cf7bbeaf-d788-4a89-94f5-af01034515c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.099661 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf7bbeaf-d788-4a89-94f5-af01034515c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.099724 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl27r\" (UniqueName: 
\"kubernetes.io/projected/cf7bbeaf-d788-4a89-94f5-af01034515c5-kube-api-access-dl27r\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.101621 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf7bbeaf-d788-4a89-94f5-af01034515c5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.105564 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cf7bbeaf-d788-4a89-94f5-af01034515c5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.122937 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl27r\" (UniqueName: \"kubernetes.io/projected/cf7bbeaf-d788-4a89-94f5-af01034515c5-kube-api-access-dl27r\") pod \"marketplace-operator-79b997595-c26vz\" (UID: \"cf7bbeaf-d788-4a89-94f5-af01034515c5\") " pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.402694 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.426089 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.503233 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lth7\" (UniqueName: \"kubernetes.io/projected/ade27118-861e-4da6-9a5e-600cfbef607f-kube-api-access-4lth7\") pod \"ade27118-861e-4da6-9a5e-600cfbef607f\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.503291 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-utilities\") pod \"ade27118-861e-4da6-9a5e-600cfbef607f\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.503318 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-catalog-content\") pod \"ade27118-861e-4da6-9a5e-600cfbef607f\" (UID: \"ade27118-861e-4da6-9a5e-600cfbef607f\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.505164 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-utilities" (OuterVolumeSpecName: "utilities") pod "ade27118-861e-4da6-9a5e-600cfbef607f" (UID: "ade27118-861e-4da6-9a5e-600cfbef607f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.507010 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade27118-861e-4da6-9a5e-600cfbef607f-kube-api-access-4lth7" (OuterVolumeSpecName: "kube-api-access-4lth7") pod "ade27118-861e-4da6-9a5e-600cfbef607f" (UID: "ade27118-861e-4da6-9a5e-600cfbef607f"). InnerVolumeSpecName "kube-api-access-4lth7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.535911 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.542177 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.557041 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ade27118-861e-4da6-9a5e-600cfbef607f" (UID: "ade27118-861e-4da6-9a5e-600cfbef607f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.557832 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.575749 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.604807 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-utilities\") pod \"4c60b76f-4f77-4591-9589-815de0bf6047\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.604868 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-utilities\") pod \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.604929 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics\") pod \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.605191 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-catalog-content\") pod \"1cbbed75-f666-4324-be28-902bb6564058\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.605459 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvfw2\" (UniqueName: \"kubernetes.io/projected/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-kube-api-access-gvfw2\") pod \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.605512 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-catalog-content\") pod \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\" (UID: \"fd9faf32-c248-4421-bbf6-66ec8b28dbc7\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.605555 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-utilities\") pod \"1cbbed75-f666-4324-be28-902bb6564058\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.606961 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f6mf\" (UniqueName: \"kubernetes.io/projected/16dd44fa-b221-497c-a9fa-7dcf08359ab1-kube-api-access-5f6mf\") pod \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\" (UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607004 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2khb\" (UniqueName: \"kubernetes.io/projected/4c60b76f-4f77-4591-9589-815de0bf6047-kube-api-access-p2khb\") pod \"4c60b76f-4f77-4591-9589-815de0bf6047\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607026 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zxw9\" (UniqueName: \"kubernetes.io/projected/1cbbed75-f666-4324-be28-902bb6564058-kube-api-access-6zxw9\") pod \"1cbbed75-f666-4324-be28-902bb6564058\" (UID: \"1cbbed75-f666-4324-be28-902bb6564058\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607059 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca\") pod \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\" 
(UID: \"16dd44fa-b221-497c-a9fa-7dcf08359ab1\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607210 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-catalog-content\") pod \"4c60b76f-4f77-4591-9589-815de0bf6047\" (UID: \"4c60b76f-4f77-4591-9589-815de0bf6047\") " Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607584 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lth7\" (UniqueName: \"kubernetes.io/projected/ade27118-861e-4da6-9a5e-600cfbef607f-kube-api-access-4lth7\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607601 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607612 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ade27118-861e-4da6-9a5e-600cfbef607f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607648 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-utilities" (OuterVolumeSpecName: "utilities") pod "4c60b76f-4f77-4591-9589-815de0bf6047" (UID: "4c60b76f-4f77-4591-9589-815de0bf6047"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.607957 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-utilities" (OuterVolumeSpecName: "utilities") pod "1cbbed75-f666-4324-be28-902bb6564058" (UID: "1cbbed75-f666-4324-be28-902bb6564058"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.608716 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "16dd44fa-b221-497c-a9fa-7dcf08359ab1" (UID: "16dd44fa-b221-497c-a9fa-7dcf08359ab1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.613816 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-utilities" (OuterVolumeSpecName: "utilities") pod "fd9faf32-c248-4421-bbf6-66ec8b28dbc7" (UID: "fd9faf32-c248-4421-bbf6-66ec8b28dbc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.614485 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbbed75-f666-4324-be28-902bb6564058-kube-api-access-6zxw9" (OuterVolumeSpecName: "kube-api-access-6zxw9") pod "1cbbed75-f666-4324-be28-902bb6564058" (UID: "1cbbed75-f666-4324-be28-902bb6564058"). InnerVolumeSpecName "kube-api-access-6zxw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.615382 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-kube-api-access-gvfw2" (OuterVolumeSpecName: "kube-api-access-gvfw2") pod "fd9faf32-c248-4421-bbf6-66ec8b28dbc7" (UID: "fd9faf32-c248-4421-bbf6-66ec8b28dbc7"). InnerVolumeSpecName "kube-api-access-gvfw2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.618153 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16dd44fa-b221-497c-a9fa-7dcf08359ab1-kube-api-access-5f6mf" (OuterVolumeSpecName: "kube-api-access-5f6mf") pod "16dd44fa-b221-497c-a9fa-7dcf08359ab1" (UID: "16dd44fa-b221-497c-a9fa-7dcf08359ab1"). InnerVolumeSpecName "kube-api-access-5f6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.621564 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c60b76f-4f77-4591-9589-815de0bf6047-kube-api-access-p2khb" (OuterVolumeSpecName: "kube-api-access-p2khb") pod "4c60b76f-4f77-4591-9589-815de0bf6047" (UID: "4c60b76f-4f77-4591-9589-815de0bf6047"). InnerVolumeSpecName "kube-api-access-p2khb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.622079 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "16dd44fa-b221-497c-a9fa-7dcf08359ab1" (UID: "16dd44fa-b221-497c-a9fa-7dcf08359ab1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.638606 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd9faf32-c248-4421-bbf6-66ec8b28dbc7" (UID: "fd9faf32-c248-4421-bbf6-66ec8b28dbc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.669389 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cbbed75-f666-4324-be28-902bb6564058" (UID: "1cbbed75-f666-4324-be28-902bb6564058"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708642 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f6mf\" (UniqueName: \"kubernetes.io/projected/16dd44fa-b221-497c-a9fa-7dcf08359ab1-kube-api-access-5f6mf\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708671 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2khb\" (UniqueName: \"kubernetes.io/projected/4c60b76f-4f77-4591-9589-815de0bf6047-kube-api-access-p2khb\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708681 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zxw9\" (UniqueName: \"kubernetes.io/projected/1cbbed75-f666-4324-be28-902bb6564058-kube-api-access-6zxw9\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708690 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708699 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708709 4729 reconciler_common.go:293] "Volume detached 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708721 4729 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16dd44fa-b221-497c-a9fa-7dcf08359ab1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708730 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708740 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvfw2\" (UniqueName: \"kubernetes.io/projected/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-kube-api-access-gvfw2\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708751 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd9faf32-c248-4421-bbf6-66ec8b28dbc7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.708760 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cbbed75-f666-4324-be28-902bb6564058-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.760240 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c60b76f-4f77-4591-9589-815de0bf6047" (UID: "4c60b76f-4f77-4591-9589-815de0bf6047"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.803460 4729 generic.go:334] "Generic (PLEG): container finished" podID="ade27118-861e-4da6-9a5e-600cfbef607f" containerID="185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4" exitCode=0 Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.803583 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpvk8" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.804056 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerDied","Data":"185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.804135 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpvk8" event={"ID":"ade27118-861e-4da6-9a5e-600cfbef607f","Type":"ContainerDied","Data":"338a660165475b32a01778d7cecaf258d21f812712654caef2cbdfe012310593"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.804205 4729 scope.go:117] "RemoveContainer" containerID="185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.805846 4729 generic.go:334] "Generic (PLEG): container finished" podID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerID="daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850" exitCode=0 Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.805945 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.805999 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" event={"ID":"16dd44fa-b221-497c-a9fa-7dcf08359ab1","Type":"ContainerDied","Data":"daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.806042 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" event={"ID":"16dd44fa-b221-497c-a9fa-7dcf08359ab1","Type":"ContainerDied","Data":"7b1c1cc4222bae6249b12806564d20257db53fc8d6d1f728b4ee136a21ce5a93"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.810267 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c60b76f-4f77-4591-9589-815de0bf6047-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.811065 4729 generic.go:334] "Generic (PLEG): container finished" podID="1cbbed75-f666-4324-be28-902bb6564058" containerID="140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260" exitCode=0 Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.811164 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerDied","Data":"140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.811565 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tgstt" event={"ID":"1cbbed75-f666-4324-be28-902bb6564058","Type":"ContainerDied","Data":"b1801c471a77aeb58219589a8eaf13d34a6787a27a22c7040bb0e0e438cd4f64"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.811198 4729 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tgstt" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.817093 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerID="ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf" exitCode=0 Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.817243 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4lk" event={"ID":"fd9faf32-c248-4421-bbf6-66ec8b28dbc7","Type":"ContainerDied","Data":"ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.817334 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gf4lk" event={"ID":"fd9faf32-c248-4421-bbf6-66ec8b28dbc7","Type":"ContainerDied","Data":"43d639436b196d701bf0784895a0c198335629016853d468af1048f8a3d2671f"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.817485 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gf4lk" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.822204 4729 generic.go:334] "Generic (PLEG): container finished" podID="4c60b76f-4f77-4591-9589-815de0bf6047" containerID="9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3" exitCode=0 Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.822240 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerDied","Data":"9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.822266 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2b2hj" event={"ID":"4c60b76f-4f77-4591-9589-815de0bf6047","Type":"ContainerDied","Data":"18de0809bd331ab19e67f73ee396425294fd8b4f862293eca7328c068ceeea95"} Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.822315 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2b2hj" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.832568 4729 scope.go:117] "RemoveContainer" containerID="bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.840705 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8548"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.853505 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d8548"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.865924 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpvk8"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.870990 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dpvk8"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.878399 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c26vz"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.879378 4729 scope.go:117] "RemoveContainer" containerID="513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.881394 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tgstt"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.885775 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tgstt"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.891983 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gf4lk"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.894996 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gf4lk"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.908827 4729 scope.go:117] "RemoveContainer" containerID="185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.909119 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2b2hj"] Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.909300 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4\": container with ID starting with 185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4 not found: ID does not exist" containerID="185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.909329 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4"} err="failed to get container status \"185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4\": rpc error: code = NotFound desc = could not find container \"185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4\": container with ID starting with 185181f263aa276eaa745182ad5214c9efdb603af447b569a23a29609d896fc4 not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.909350 4729 scope.go:117] "RemoveContainer" containerID="bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5" Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.909732 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5\": container with ID starting with bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5 
not found: ID does not exist" containerID="bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.909778 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5"} err="failed to get container status \"bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5\": rpc error: code = NotFound desc = could not find container \"bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5\": container with ID starting with bbb9e9a859c858454dd005b1e1b1953697f31bf3a548ceb4bd538525394e69d5 not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.909810 4729 scope.go:117] "RemoveContainer" containerID="513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d" Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.910197 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d\": container with ID starting with 513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d not found: ID does not exist" containerID="513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.910242 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d"} err="failed to get container status \"513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d\": rpc error: code = NotFound desc = could not find container \"513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d\": container with ID starting with 513c6ae57abdd77c52812c4224773f1909fbc8ccfb313592c1c0dc071fe5017d not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 
14:12:35.910271 4729 scope.go:117] "RemoveContainer" containerID="daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.914179 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2b2hj"] Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.923005 4729 scope.go:117] "RemoveContainer" containerID="daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850" Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.923377 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850\": container with ID starting with daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850 not found: ID does not exist" containerID="daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.923404 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850"} err="failed to get container status \"daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850\": rpc error: code = NotFound desc = could not find container \"daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850\": container with ID starting with daa9f4a2d558ea26e0becd8b00b6efd90f9071be0699439be86743f0a21f9850 not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.923424 4729 scope.go:117] "RemoveContainer" containerID="140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.939368 4729 scope.go:117] "RemoveContainer" containerID="7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.963627 4729 scope.go:117] "RemoveContainer" 
containerID="e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.979328 4729 scope.go:117] "RemoveContainer" containerID="140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260" Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.979770 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260\": container with ID starting with 140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260 not found: ID does not exist" containerID="140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.979799 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260"} err="failed to get container status \"140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260\": rpc error: code = NotFound desc = could not find container \"140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260\": container with ID starting with 140dab587064ad9e11c17f7a67c0f3afab8077c063d90ec126f4275d5d286260 not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.979824 4729 scope.go:117] "RemoveContainer" containerID="7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6" Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.980146 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6\": container with ID starting with 7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6 not found: ID does not exist" containerID="7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6" Jan 27 14:12:35 crc 
kubenswrapper[4729]: I0127 14:12:35.980195 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6"} err="failed to get container status \"7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6\": rpc error: code = NotFound desc = could not find container \"7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6\": container with ID starting with 7202e27060609b37f99dd3963882006666852e6bff068ece4ae60874d81db5c6 not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.980224 4729 scope.go:117] "RemoveContainer" containerID="e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb" Jan 27 14:12:35 crc kubenswrapper[4729]: E0127 14:12:35.980554 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb\": container with ID starting with e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb not found: ID does not exist" containerID="e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.980580 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb"} err="failed to get container status \"e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb\": rpc error: code = NotFound desc = could not find container \"e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb\": container with ID starting with e6d4f468eb835ace2b2676a2243900800b09eb43231c519f18f908856793b0bb not found: ID does not exist" Jan 27 14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.980595 4729 scope.go:117] "RemoveContainer" containerID="ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf" Jan 27 
14:12:35 crc kubenswrapper[4729]: I0127 14:12:35.993389 4729 scope.go:117] "RemoveContainer" containerID="bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.004759 4729 scope.go:117] "RemoveContainer" containerID="27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.018667 4729 scope.go:117] "RemoveContainer" containerID="ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf" Jan 27 14:12:36 crc kubenswrapper[4729]: E0127 14:12:36.019155 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf\": container with ID starting with ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf not found: ID does not exist" containerID="ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.019248 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf"} err="failed to get container status \"ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf\": rpc error: code = NotFound desc = could not find container \"ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf\": container with ID starting with ed35e18b82d035f43b708933bde5a5ae65564e6e6065b7dfea5c4e8411060bbf not found: ID does not exist" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.019275 4729 scope.go:117] "RemoveContainer" containerID="bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941" Jan 27 14:12:36 crc kubenswrapper[4729]: E0127 14:12:36.019820 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941\": container with ID starting with bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941 not found: ID does not exist" containerID="bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.019853 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941"} err="failed to get container status \"bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941\": rpc error: code = NotFound desc = could not find container \"bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941\": container with ID starting with bcc781b17a496c09f93ef8c6decccbb1c870e36bc63e99962e2e832232017941 not found: ID does not exist" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.019937 4729 scope.go:117] "RemoveContainer" containerID="27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d" Jan 27 14:12:36 crc kubenswrapper[4729]: E0127 14:12:36.020184 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d\": container with ID starting with 27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d not found: ID does not exist" containerID="27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.020211 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d"} err="failed to get container status \"27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d\": rpc error: code = NotFound desc = could not find container \"27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d\": container with ID 
starting with 27502df7071d3361404b1704aceca889287f4aa6e82e9ea7cc4212333019ff6d not found: ID does not exist" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.020228 4729 scope.go:117] "RemoveContainer" containerID="9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.033455 4729 scope.go:117] "RemoveContainer" containerID="b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.061367 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" path="/var/lib/kubelet/pods/16dd44fa-b221-497c-a9fa-7dcf08359ab1/volumes" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.062155 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cbbed75-f666-4324-be28-902bb6564058" path="/var/lib/kubelet/pods/1cbbed75-f666-4324-be28-902bb6564058/volumes" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.063148 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" path="/var/lib/kubelet/pods/4c60b76f-4f77-4591-9589-815de0bf6047/volumes" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.064463 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" path="/var/lib/kubelet/pods/ade27118-861e-4da6-9a5e-600cfbef607f/volumes" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.065176 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" path="/var/lib/kubelet/pods/fd9faf32-c248-4421-bbf6-66ec8b28dbc7/volumes" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.090472 4729 scope.go:117] "RemoveContainer" containerID="f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.112105 4729 scope.go:117] "RemoveContainer" 
containerID="9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3" Jan 27 14:12:36 crc kubenswrapper[4729]: E0127 14:12:36.112980 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3\": container with ID starting with 9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3 not found: ID does not exist" containerID="9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.113031 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3"} err="failed to get container status \"9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3\": rpc error: code = NotFound desc = could not find container \"9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3\": container with ID starting with 9b6f670c1819e652784765f602d111f3975c3e7b53f601e48715c80ba50daab3 not found: ID does not exist" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.113057 4729 scope.go:117] "RemoveContainer" containerID="b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3" Jan 27 14:12:36 crc kubenswrapper[4729]: E0127 14:12:36.114054 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3\": container with ID starting with b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3 not found: ID does not exist" containerID="b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.114079 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3"} err="failed to get container status \"b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3\": rpc error: code = NotFound desc = could not find container \"b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3\": container with ID starting with b7b69b9f67f424f908052c44ce6e9e05b30369b0e54fe182ddff048e5d6628f3 not found: ID does not exist" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.114094 4729 scope.go:117] "RemoveContainer" containerID="f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931" Jan 27 14:12:36 crc kubenswrapper[4729]: E0127 14:12:36.114422 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931\": container with ID starting with f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931 not found: ID does not exist" containerID="f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.114454 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931"} err="failed to get container status \"f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931\": rpc error: code = NotFound desc = could not find container \"f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931\": container with ID starting with f0096c1a8ded679e3e7ded309e15119e3c2c464c56ed7b22c9b0ac5d06601931 not found: ID does not exist" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.519568 4729 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d8548 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 
10.217.0.38:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.519931 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d8548" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.834064 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" event={"ID":"cf7bbeaf-d788-4a89-94f5-af01034515c5","Type":"ContainerStarted","Data":"79eba4d6ff029507f90dffe6df0976d69a99c903b504ffc4ff834426a5032780"} Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.834116 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" event={"ID":"cf7bbeaf-d788-4a89-94f5-af01034515c5","Type":"ContainerStarted","Data":"4984bcad41fd1bfc38dbbc04a20543ea5c12830b1d885e8f8cd0ea1bf577d80e"} Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.834335 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.837957 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" Jan 27 14:12:36 crc kubenswrapper[4729]: I0127 14:12:36.856101 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-c26vz" podStartSLOduration=2.856081524 podStartE2EDuration="2.856081524s" podCreationTimestamp="2026-01-27 14:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:12:36.851606346 +0000 UTC m=+443.435797350" watchObservedRunningTime="2026-01-27 14:12:36.856081524 +0000 UTC m=+443.440272528" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126145 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlhr9"] Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126382 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126393 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126418 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126423 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126432 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126438 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126444 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126449 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 
14:12:37.126457 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126462 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126470 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126490 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126497 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126503 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126511 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126516 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="extract-utilities" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126524 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126529 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 
14:12:37.126541 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126546 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="extract-content" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126567 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126573 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126581 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126586 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: E0127 14:12:37.126594 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126601 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126743 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbbed75-f666-4324-be28-902bb6564058" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126760 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c60b76f-4f77-4591-9589-815de0bf6047" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: 
I0127 14:12:37.126773 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="16dd44fa-b221-497c-a9fa-7dcf08359ab1" containerName="marketplace-operator" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126782 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade27118-861e-4da6-9a5e-600cfbef607f" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.126791 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9faf32-c248-4421-bbf6-66ec8b28dbc7" containerName="registry-server" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.127867 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.131898 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.139824 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlhr9"] Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.225669 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f11add35-16a1-4182-92bb-55f9144ffe2a-catalog-content\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.225755 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f11add35-16a1-4182-92bb-55f9144ffe2a-utilities\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 
14:12:37.225932 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-654zh\" (UniqueName: \"kubernetes.io/projected/f11add35-16a1-4182-92bb-55f9144ffe2a-kube-api-access-654zh\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.318516 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mkj77"] Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.320126 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.323327 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.326821 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f11add35-16a1-4182-92bb-55f9144ffe2a-utilities\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.326914 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-654zh\" (UniqueName: \"kubernetes.io/projected/f11add35-16a1-4182-92bb-55f9144ffe2a-kube-api-access-654zh\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.326970 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f11add35-16a1-4182-92bb-55f9144ffe2a-catalog-content\") pod 
\"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.327360 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f11add35-16a1-4182-92bb-55f9144ffe2a-utilities\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.327382 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f11add35-16a1-4182-92bb-55f9144ffe2a-catalog-content\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.336290 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkj77"] Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.362354 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-654zh\" (UniqueName: \"kubernetes.io/projected/f11add35-16a1-4182-92bb-55f9144ffe2a-kube-api-access-654zh\") pod \"redhat-marketplace-wlhr9\" (UID: \"f11add35-16a1-4182-92bb-55f9144ffe2a\") " pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.427989 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-utilities\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.428070 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zpqr\" (UniqueName: \"kubernetes.io/projected/8791c2ee-d19b-4208-b783-7de3eab67cad-kube-api-access-7zpqr\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.428134 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-catalog-content\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.448801 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.531473 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-catalog-content\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.531994 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-utilities\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.532019 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zpqr\" (UniqueName: \"kubernetes.io/projected/8791c2ee-d19b-4208-b783-7de3eab67cad-kube-api-access-7zpqr\") pod 
\"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.532118 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-catalog-content\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.532342 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-utilities\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.550982 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zpqr\" (UniqueName: \"kubernetes.io/projected/8791c2ee-d19b-4208-b783-7de3eab67cad-kube-api-access-7zpqr\") pod \"certified-operators-mkj77\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.635448 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.820169 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlhr9"] Jan 27 14:12:37 crc kubenswrapper[4729]: W0127 14:12:37.825785 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11add35_16a1_4182_92bb_55f9144ffe2a.slice/crio-9037768eaebb42d3a6cfefdb814b8d173daec68f1a250809092a115f7275f537 WatchSource:0}: Error finding container 9037768eaebb42d3a6cfefdb814b8d173daec68f1a250809092a115f7275f537: Status 404 returned error can't find the container with id 9037768eaebb42d3a6cfefdb814b8d173daec68f1a250809092a115f7275f537 Jan 27 14:12:37 crc kubenswrapper[4729]: I0127 14:12:37.839357 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlhr9" event={"ID":"f11add35-16a1-4182-92bb-55f9144ffe2a","Type":"ContainerStarted","Data":"9037768eaebb42d3a6cfefdb814b8d173daec68f1a250809092a115f7275f537"} Jan 27 14:12:38 crc kubenswrapper[4729]: I0127 14:12:38.014734 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mkj77"] Jan 27 14:12:38 crc kubenswrapper[4729]: W0127 14:12:38.019352 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8791c2ee_d19b_4208_b783_7de3eab67cad.slice/crio-d606d103cebf7bcbca16e7ccab5c3b3e5c880f0d69210e9e53455978dd0194ca WatchSource:0}: Error finding container d606d103cebf7bcbca16e7ccab5c3b3e5c880f0d69210e9e53455978dd0194ca: Status 404 returned error can't find the container with id d606d103cebf7bcbca16e7ccab5c3b3e5c880f0d69210e9e53455978dd0194ca Jan 27 14:12:38 crc kubenswrapper[4729]: I0127 14:12:38.846166 4729 generic.go:334] "Generic (PLEG): container finished" podID="8791c2ee-d19b-4208-b783-7de3eab67cad" 
containerID="ef9b148a107004cf9ab78f0217c439148797f5480e0ccb665d1baba4c974d00b" exitCode=0 Jan 27 14:12:38 crc kubenswrapper[4729]: I0127 14:12:38.846257 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerDied","Data":"ef9b148a107004cf9ab78f0217c439148797f5480e0ccb665d1baba4c974d00b"} Jan 27 14:12:38 crc kubenswrapper[4729]: I0127 14:12:38.846532 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerStarted","Data":"d606d103cebf7bcbca16e7ccab5c3b3e5c880f0d69210e9e53455978dd0194ca"} Jan 27 14:12:38 crc kubenswrapper[4729]: I0127 14:12:38.847715 4729 generic.go:334] "Generic (PLEG): container finished" podID="f11add35-16a1-4182-92bb-55f9144ffe2a" containerID="3aa2d67e53b511df5ffc7fad90fb90ccc6e6456c6b5d259fa20cb84992589683" exitCode=0 Jan 27 14:12:38 crc kubenswrapper[4729]: I0127 14:12:38.847860 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlhr9" event={"ID":"f11add35-16a1-4182-92bb-55f9144ffe2a","Type":"ContainerDied","Data":"3aa2d67e53b511df5ffc7fad90fb90ccc6e6456c6b5d259fa20cb84992589683"} Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.516914 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bqcfg"] Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.522751 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.524356 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.526460 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqcfg"] Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.556021 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-catalog-content\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.556430 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fktw\" (UniqueName: \"kubernetes.io/projected/d62a0134-758f-4404-b73c-77d7070bd4dd-kube-api-access-4fktw\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.556475 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-utilities\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.658028 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fktw\" (UniqueName: \"kubernetes.io/projected/d62a0134-758f-4404-b73c-77d7070bd4dd-kube-api-access-4fktw\") pod \"community-operators-bqcfg\" 
(UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.658071 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-utilities\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.658133 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-catalog-content\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.658574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-catalog-content\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.658606 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-utilities\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.680183 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fktw\" (UniqueName: \"kubernetes.io/projected/d62a0134-758f-4404-b73c-77d7070bd4dd-kube-api-access-4fktw\") pod \"community-operators-bqcfg\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " 
pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.720765 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9nd4"] Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.722440 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.725968 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.730806 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9nd4"] Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.759505 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj24t\" (UniqueName: \"kubernetes.io/projected/fd6bbce9-632e-4493-9867-9859ee8a4aeb-kube-api-access-jj24t\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.759591 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6bbce9-632e-4493-9867-9859ee8a4aeb-utilities\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.759642 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6bbce9-632e-4493-9867-9859ee8a4aeb-catalog-content\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" 
Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.847360 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.854821 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerStarted","Data":"959243de95f2a21a5785c873b33907e2a251f765dbe4de4cc6c4b92ad21e9bc4"} Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.864587 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj24t\" (UniqueName: \"kubernetes.io/projected/fd6bbce9-632e-4493-9867-9859ee8a4aeb-kube-api-access-jj24t\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.864655 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6bbce9-632e-4493-9867-9859ee8a4aeb-utilities\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.864699 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6bbce9-632e-4493-9867-9859ee8a4aeb-catalog-content\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.865133 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6bbce9-632e-4493-9867-9859ee8a4aeb-catalog-content\") pod \"redhat-operators-v9nd4\" (UID: 
\"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.865262 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6bbce9-632e-4493-9867-9859ee8a4aeb-utilities\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:39 crc kubenswrapper[4729]: I0127 14:12:39.883300 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj24t\" (UniqueName: \"kubernetes.io/projected/fd6bbce9-632e-4493-9867-9859ee8a4aeb-kube-api-access-jj24t\") pod \"redhat-operators-v9nd4\" (UID: \"fd6bbce9-632e-4493-9867-9859ee8a4aeb\") " pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.039902 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.247492 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bqcfg"] Jan 27 14:12:40 crc kubenswrapper[4729]: W0127 14:12:40.249494 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62a0134_758f_4404_b73c_77d7070bd4dd.slice/crio-f4b4c96916289a1689d4344ace461159c3e9f7f54cad780ed4fb78316ded5341 WatchSource:0}: Error finding container f4b4c96916289a1689d4344ace461159c3e9f7f54cad780ed4fb78316ded5341: Status 404 returned error can't find the container with id f4b4c96916289a1689d4344ace461159c3e9f7f54cad780ed4fb78316ded5341 Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.405957 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9nd4"] Jan 27 14:12:40 crc kubenswrapper[4729]: W0127 14:12:40.476943 
4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd6bbce9_632e_4493_9867_9859ee8a4aeb.slice/crio-199dcbc906ab48e3d4dcd1ec7e3a61208a030fec95905e74bd5bf02a46995b2e WatchSource:0}: Error finding container 199dcbc906ab48e3d4dcd1ec7e3a61208a030fec95905e74bd5bf02a46995b2e: Status 404 returned error can't find the container with id 199dcbc906ab48e3d4dcd1ec7e3a61208a030fec95905e74bd5bf02a46995b2e Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.861814 4729 generic.go:334] "Generic (PLEG): container finished" podID="f11add35-16a1-4182-92bb-55f9144ffe2a" containerID="f5ba1cd7289721b6256d4b389aa0adfd33aef888cbaff1f56f0220c4335aa614" exitCode=0 Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.861895 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlhr9" event={"ID":"f11add35-16a1-4182-92bb-55f9144ffe2a","Type":"ContainerDied","Data":"f5ba1cd7289721b6256d4b389aa0adfd33aef888cbaff1f56f0220c4335aa614"} Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.863965 4729 generic.go:334] "Generic (PLEG): container finished" podID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerID="d3be9105aee6a7412521dceddec1e2f1b8777a45c7a77c913240017f683713dc" exitCode=0 Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.864041 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcfg" event={"ID":"d62a0134-758f-4404-b73c-77d7070bd4dd","Type":"ContainerDied","Data":"d3be9105aee6a7412521dceddec1e2f1b8777a45c7a77c913240017f683713dc"} Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.864083 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcfg" event={"ID":"d62a0134-758f-4404-b73c-77d7070bd4dd","Type":"ContainerStarted","Data":"f4b4c96916289a1689d4344ace461159c3e9f7f54cad780ed4fb78316ded5341"} Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 
14:12:40.866553 4729 generic.go:334] "Generic (PLEG): container finished" podID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerID="959243de95f2a21a5785c873b33907e2a251f765dbe4de4cc6c4b92ad21e9bc4" exitCode=0 Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.866612 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerDied","Data":"959243de95f2a21a5785c873b33907e2a251f765dbe4de4cc6c4b92ad21e9bc4"} Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.868560 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd6bbce9-632e-4493-9867-9859ee8a4aeb" containerID="e2999612e5aaae930a111726006510a5061ce6c76e24bc52a21f951e137b9768" exitCode=0 Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.868617 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9nd4" event={"ID":"fd6bbce9-632e-4493-9867-9859ee8a4aeb","Type":"ContainerDied","Data":"e2999612e5aaae930a111726006510a5061ce6c76e24bc52a21f951e137b9768"} Jan 27 14:12:40 crc kubenswrapper[4729]: I0127 14:12:40.868640 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9nd4" event={"ID":"fd6bbce9-632e-4493-9867-9859ee8a4aeb","Type":"ContainerStarted","Data":"199dcbc906ab48e3d4dcd1ec7e3a61208a030fec95905e74bd5bf02a46995b2e"} Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.875651 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlhr9" event={"ID":"f11add35-16a1-4182-92bb-55f9144ffe2a","Type":"ContainerStarted","Data":"b93206ae553c4b641b39f71a659d01ed51633d5b0d5d165bf1f39511a6b6661d"} Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.878137 4729 generic.go:334] "Generic (PLEG): container finished" podID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerID="879b46222d499f149e39baee77a92e254fc562bcfe1092289e66e223462c9627" 
exitCode=0 Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.878240 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcfg" event={"ID":"d62a0134-758f-4404-b73c-77d7070bd4dd","Type":"ContainerDied","Data":"879b46222d499f149e39baee77a92e254fc562bcfe1092289e66e223462c9627"} Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.880347 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerStarted","Data":"5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552"} Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.883642 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9nd4" event={"ID":"fd6bbce9-632e-4493-9867-9859ee8a4aeb","Type":"ContainerStarted","Data":"2f0fb8cd9de275e7f85a22e2e6f1947e73527c4c95a65407f8da37ad49a76166"} Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.897856 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlhr9" podStartSLOduration=2.183386287 podStartE2EDuration="4.897838833s" podCreationTimestamp="2026-01-27 14:12:37 +0000 UTC" firstStartedPulling="2026-01-27 14:12:38.852914461 +0000 UTC m=+445.437105465" lastFinishedPulling="2026-01-27 14:12:41.567367007 +0000 UTC m=+448.151558011" observedRunningTime="2026-01-27 14:12:41.894760892 +0000 UTC m=+448.478951896" watchObservedRunningTime="2026-01-27 14:12:41.897838833 +0000 UTC m=+448.482029837" Jan 27 14:12:41 crc kubenswrapper[4729]: I0127 14:12:41.942566 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mkj77" podStartSLOduration=2.506140879 podStartE2EDuration="4.942549305s" podCreationTimestamp="2026-01-27 14:12:37 +0000 UTC" firstStartedPulling="2026-01-27 14:12:38.849244174 +0000 UTC m=+445.433435168" 
lastFinishedPulling="2026-01-27 14:12:41.2856526 +0000 UTC m=+447.869843594" observedRunningTime="2026-01-27 14:12:41.940644385 +0000 UTC m=+448.524835399" watchObservedRunningTime="2026-01-27 14:12:41.942549305 +0000 UTC m=+448.526740309" Jan 27 14:12:42 crc kubenswrapper[4729]: I0127 14:12:42.891261 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcfg" event={"ID":"d62a0134-758f-4404-b73c-77d7070bd4dd","Type":"ContainerStarted","Data":"55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da"} Jan 27 14:12:42 crc kubenswrapper[4729]: I0127 14:12:42.893284 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd6bbce9-632e-4493-9867-9859ee8a4aeb" containerID="2f0fb8cd9de275e7f85a22e2e6f1947e73527c4c95a65407f8da37ad49a76166" exitCode=0 Jan 27 14:12:42 crc kubenswrapper[4729]: I0127 14:12:42.893729 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9nd4" event={"ID":"fd6bbce9-632e-4493-9867-9859ee8a4aeb","Type":"ContainerDied","Data":"2f0fb8cd9de275e7f85a22e2e6f1947e73527c4c95a65407f8da37ad49a76166"} Jan 27 14:12:42 crc kubenswrapper[4729]: I0127 14:12:42.935795 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bqcfg" podStartSLOduration=2.313929627 podStartE2EDuration="3.935777341s" podCreationTimestamp="2026-01-27 14:12:39 +0000 UTC" firstStartedPulling="2026-01-27 14:12:40.866058398 +0000 UTC m=+447.450249402" lastFinishedPulling="2026-01-27 14:12:42.487906112 +0000 UTC m=+449.072097116" observedRunningTime="2026-01-27 14:12:42.914992682 +0000 UTC m=+449.499183706" watchObservedRunningTime="2026-01-27 14:12:42.935777341 +0000 UTC m=+449.519968345" Jan 27 14:12:43 crc kubenswrapper[4729]: I0127 14:12:43.900342 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9nd4" 
event={"ID":"fd6bbce9-632e-4493-9867-9859ee8a4aeb","Type":"ContainerStarted","Data":"b7404609e25dba9e37d9613cb95bc7cc23b21eefe1df697c1e4039f29ab234e5"} Jan 27 14:12:43 crc kubenswrapper[4729]: I0127 14:12:43.916691 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9nd4" podStartSLOduration=2.285058744 podStartE2EDuration="4.9166723s" podCreationTimestamp="2026-01-27 14:12:39 +0000 UTC" firstStartedPulling="2026-01-27 14:12:40.870020163 +0000 UTC m=+447.454211167" lastFinishedPulling="2026-01-27 14:12:43.501633719 +0000 UTC m=+450.085824723" observedRunningTime="2026-01-27 14:12:43.9147416 +0000 UTC m=+450.498932614" watchObservedRunningTime="2026-01-27 14:12:43.9166723 +0000 UTC m=+450.500863314" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.449733 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.450297 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.488904 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.636208 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.636270 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.671006 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.952702 4729 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlhr9" Jan 27 14:12:47 crc kubenswrapper[4729]: I0127 14:12:47.956280 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:12:49 crc kubenswrapper[4729]: I0127 14:12:49.848199 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:49 crc kubenswrapper[4729]: I0127 14:12:49.850231 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:49 crc kubenswrapper[4729]: I0127 14:12:49.887905 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:49 crc kubenswrapper[4729]: I0127 14:12:49.968867 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:12:50 crc kubenswrapper[4729]: I0127 14:12:50.040096 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:50 crc kubenswrapper[4729]: I0127 14:12:50.040155 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:50 crc kubenswrapper[4729]: I0127 14:12:50.080296 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:12:50 crc kubenswrapper[4729]: I0127 14:12:50.971478 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9nd4" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.245286 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f"] Jan 27 
14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.246703 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.248750 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.248796 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.249456 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.253013 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.253636 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.255749 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f"] Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.373967 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/07e4a812-d4fd-4905-968f-809f51de9caa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.374022 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg26z\" (UniqueName: 
\"kubernetes.io/projected/07e4a812-d4fd-4905-968f-809f51de9caa-kube-api-access-xg26z\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.374046 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/07e4a812-d4fd-4905-968f-809f51de9caa-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.475438 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/07e4a812-d4fd-4905-968f-809f51de9caa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.475497 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg26z\" (UniqueName: \"kubernetes.io/projected/07e4a812-d4fd-4905-968f-809f51de9caa-kube-api-access-xg26z\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.475520 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/07e4a812-d4fd-4905-968f-809f51de9caa-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.476592 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/07e4a812-d4fd-4905-968f-809f51de9caa-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.482846 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/07e4a812-d4fd-4905-968f-809f51de9caa-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.492851 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg26z\" (UniqueName: \"kubernetes.io/projected/07e4a812-d4fd-4905-968f-809f51de9caa-kube-api-access-xg26z\") pod \"cluster-monitoring-operator-6d5b84845-kz42f\" (UID: \"07e4a812-d4fd-4905-968f-809f51de9caa\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.566672 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.942863 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f"] Jan 27 14:13:06 crc kubenswrapper[4729]: I0127 14:13:06.950808 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:13:07 crc kubenswrapper[4729]: I0127 14:13:07.015701 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" event={"ID":"07e4a812-d4fd-4905-968f-809f51de9caa","Type":"ContainerStarted","Data":"16b18f594a9a0b9b9c45e6b8f3157e039f77a371c315dd89a6dcb7a606f42a44"} Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.026122 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" event={"ID":"07e4a812-d4fd-4905-968f-809f51de9caa","Type":"ContainerStarted","Data":"4e382515f86bf3cbab6126c7874b1b729f3835943d84cbab1456c2272e93447b"} Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.042272 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-kz42f" podStartSLOduration=1.298510546 podStartE2EDuration="3.042252046s" podCreationTimestamp="2026-01-27 14:13:06 +0000 UTC" firstStartedPulling="2026-01-27 14:13:06.950416121 +0000 UTC m=+473.534607165" lastFinishedPulling="2026-01-27 14:13:08.694157661 +0000 UTC m=+475.278348665" observedRunningTime="2026-01-27 14:13:09.040742844 +0000 UTC m=+475.624933848" watchObservedRunningTime="2026-01-27 14:13:09.042252046 +0000 UTC m=+475.626443050" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.275548 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h57fq"] Jan 27 14:13:09 crc kubenswrapper[4729]: 
I0127 14:13:09.276712 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.298069 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h57fq"] Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.379593 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5"] Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.380198 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.382029 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-w7fwv" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.382204 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.390324 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5"] Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.411617 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8c8l\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-kube-api-access-p8c8l\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.411978 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/7fb71a46-20e0-4bf8-8b0e-c32040385b31-trusted-ca\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.412066 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb71a46-20e0-4bf8-8b0e-c32040385b31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.412164 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-bound-sa-token\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.412260 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-registry-tls\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.412493 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc 
kubenswrapper[4729]: I0127 14:13:09.412566 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb71a46-20e0-4bf8-8b0e-c32040385b31-registry-certificates\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.412614 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb71a46-20e0-4bf8-8b0e-c32040385b31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.434773 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514505 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8c8l\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-kube-api-access-p8c8l\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514578 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb71a46-20e0-4bf8-8b0e-c32040385b31-trusted-ca\") pod 
\"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514603 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-bound-sa-token\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514623 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb71a46-20e0-4bf8-8b0e-c32040385b31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514646 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-registry-tls\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514706 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb71a46-20e0-4bf8-8b0e-c32040385b31-registry-certificates\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514732 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/7fb71a46-20e0-4bf8-8b0e-c32040385b31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.514754 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4ee6022-b5c2-4ec1-8b01-c63b538c3c13-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ww9b5\" (UID: \"a4ee6022-b5c2-4ec1-8b01-c63b538c3c13\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.515218 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7fb71a46-20e0-4bf8-8b0e-c32040385b31-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.516824 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7fb71a46-20e0-4bf8-8b0e-c32040385b31-registry-certificates\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.517850 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7fb71a46-20e0-4bf8-8b0e-c32040385b31-trusted-ca\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.520129 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-registry-tls\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.520348 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7fb71a46-20e0-4bf8-8b0e-c32040385b31-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.530072 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-bound-sa-token\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.531967 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8c8l\" (UniqueName: \"kubernetes.io/projected/7fb71a46-20e0-4bf8-8b0e-c32040385b31-kube-api-access-p8c8l\") pod \"image-registry-66df7c8f76-h57fq\" (UID: \"7fb71a46-20e0-4bf8-8b0e-c32040385b31\") " pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.590841 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.616202 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4ee6022-b5c2-4ec1-8b01-c63b538c3c13-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ww9b5\" (UID: \"a4ee6022-b5c2-4ec1-8b01-c63b538c3c13\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.620686 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/a4ee6022-b5c2-4ec1-8b01-c63b538c3c13-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-ww9b5\" (UID: \"a4ee6022-b5c2-4ec1-8b01-c63b538c3c13\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.694257 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:09 crc kubenswrapper[4729]: I0127 14:13:09.979220 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h57fq"] Jan 27 14:13:09 crc kubenswrapper[4729]: W0127 14:13:09.985435 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fb71a46_20e0_4bf8_8b0e_c32040385b31.slice/crio-db153a7ef0ea7df6c394775a3cfb2252cdcac1e074b081c95c8cfc0d5fdd8027 WatchSource:0}: Error finding container db153a7ef0ea7df6c394775a3cfb2252cdcac1e074b081c95c8cfc0d5fdd8027: Status 404 returned error can't find the container with id db153a7ef0ea7df6c394775a3cfb2252cdcac1e074b081c95c8cfc0d5fdd8027 Jan 27 14:13:10 crc kubenswrapper[4729]: I0127 14:13:10.033525 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" event={"ID":"7fb71a46-20e0-4bf8-8b0e-c32040385b31","Type":"ContainerStarted","Data":"db153a7ef0ea7df6c394775a3cfb2252cdcac1e074b081c95c8cfc0d5fdd8027"} Jan 27 14:13:10 crc kubenswrapper[4729]: I0127 14:13:10.066038 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5"] Jan 27 14:13:10 crc kubenswrapper[4729]: W0127 14:13:10.073370 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ee6022_b5c2_4ec1_8b01_c63b538c3c13.slice/crio-b362d3125c76b7f7fd8acd232386e89b263ce4a205d215a220a7c27728369f1f WatchSource:0}: Error finding container b362d3125c76b7f7fd8acd232386e89b263ce4a205d215a220a7c27728369f1f: Status 404 returned error can't find the container with id b362d3125c76b7f7fd8acd232386e89b263ce4a205d215a220a7c27728369f1f Jan 27 14:13:11 crc kubenswrapper[4729]: I0127 14:13:11.039103 4729 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" event={"ID":"a4ee6022-b5c2-4ec1-8b01-c63b538c3c13","Type":"ContainerStarted","Data":"b362d3125c76b7f7fd8acd232386e89b263ce4a205d215a220a7c27728369f1f"} Jan 27 14:13:11 crc kubenswrapper[4729]: I0127 14:13:11.040459 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" event={"ID":"7fb71a46-20e0-4bf8-8b0e-c32040385b31","Type":"ContainerStarted","Data":"fd561a170cb7446284e769a7ee9f48c1b14776567a34f8f86e5988240b970c8e"} Jan 27 14:13:11 crc kubenswrapper[4729]: I0127 14:13:11.040693 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:11 crc kubenswrapper[4729]: I0127 14:13:11.066867 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" podStartSLOduration=2.066847814 podStartE2EDuration="2.066847814s" podCreationTimestamp="2026-01-27 14:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:11.061294229 +0000 UTC m=+477.645485233" watchObservedRunningTime="2026-01-27 14:13:11.066847814 +0000 UTC m=+477.651038818" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.047538 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" event={"ID":"a4ee6022-b5c2-4ec1-8b01-c63b538c3c13","Type":"ContainerStarted","Data":"ac0067f0f837711a69cd94761405f5e65f75e4b71bda3ead0b7cfc1caf5ae209"} Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.048454 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.057298 
4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.066330 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" podStartSLOduration=1.944809388 podStartE2EDuration="3.0663081s" podCreationTimestamp="2026-01-27 14:13:09 +0000 UTC" firstStartedPulling="2026-01-27 14:13:10.075906146 +0000 UTC m=+476.660097150" lastFinishedPulling="2026-01-27 14:13:11.197404848 +0000 UTC m=+477.781595862" observedRunningTime="2026-01-27 14:13:12.064734366 +0000 UTC m=+478.648925380" watchObservedRunningTime="2026-01-27 14:13:12.0663081 +0000 UTC m=+478.650499114" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.441536 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-jql28"] Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.442608 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.444799 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.445208 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-85mbl" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.446271 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.446532 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.452526 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-jql28"] Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.550361 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c92dfe4b-2a65-452a-9041-af82eb9c0919-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.550421 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c92dfe4b-2a65-452a-9041-af82eb9c0919-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 
14:13:12.550462 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c92dfe4b-2a65-452a-9041-af82eb9c0919-metrics-client-ca\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.550573 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjc7m\" (UniqueName: \"kubernetes.io/projected/c92dfe4b-2a65-452a-9041-af82eb9c0919-kube-api-access-jjc7m\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.651564 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c92dfe4b-2a65-452a-9041-af82eb9c0919-metrics-client-ca\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.651618 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjc7m\" (UniqueName: \"kubernetes.io/projected/c92dfe4b-2a65-452a-9041-af82eb9c0919-kube-api-access-jjc7m\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.651684 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c92dfe4b-2a65-452a-9041-af82eb9c0919-prometheus-operator-tls\") pod 
\"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.651727 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c92dfe4b-2a65-452a-9041-af82eb9c0919-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.653013 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c92dfe4b-2a65-452a-9041-af82eb9c0919-metrics-client-ca\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.657204 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/c92dfe4b-2a65-452a-9041-af82eb9c0919-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.657311 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/c92dfe4b-2a65-452a-9041-af82eb9c0919-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.666553 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjc7m\" (UniqueName: \"kubernetes.io/projected/c92dfe4b-2a65-452a-9041-af82eb9c0919-kube-api-access-jjc7m\") pod \"prometheus-operator-db54df47d-jql28\" (UID: \"c92dfe4b-2a65-452a-9041-af82eb9c0919\") " pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:12 crc kubenswrapper[4729]: I0127 14:13:12.758649 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" Jan 27 14:13:13 crc kubenswrapper[4729]: I0127 14:13:13.140958 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-jql28"] Jan 27 14:13:13 crc kubenswrapper[4729]: W0127 14:13:13.147173 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc92dfe4b_2a65_452a_9041_af82eb9c0919.slice/crio-3b96602ae871d9b2099a08334043db27b5423e429ec9784f695282e78cd520e0 WatchSource:0}: Error finding container 3b96602ae871d9b2099a08334043db27b5423e429ec9784f695282e78cd520e0: Status 404 returned error can't find the container with id 3b96602ae871d9b2099a08334043db27b5423e429ec9784f695282e78cd520e0 Jan 27 14:13:14 crc kubenswrapper[4729]: I0127 14:13:14.062310 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" event={"ID":"c92dfe4b-2a65-452a-9041-af82eb9c0919","Type":"ContainerStarted","Data":"3b96602ae871d9b2099a08334043db27b5423e429ec9784f695282e78cd520e0"} Jan 27 14:13:15 crc kubenswrapper[4729]: I0127 14:13:15.070620 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" event={"ID":"c92dfe4b-2a65-452a-9041-af82eb9c0919","Type":"ContainerStarted","Data":"ae299c5955404ee431e5690d45f7feb6ed6c6a3b34fe47c94179477dd7522496"} Jan 27 14:13:15 crc kubenswrapper[4729]: I0127 
14:13:15.070956 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" event={"ID":"c92dfe4b-2a65-452a-9041-af82eb9c0919","Type":"ContainerStarted","Data":"d356afc6b504909c3fe0adb3b84407aa1d2c2b03b24278a834ac02ffd17deae6"} Jan 27 14:13:15 crc kubenswrapper[4729]: I0127 14:13:15.090023 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-jql28" podStartSLOduration=1.5822593820000002 podStartE2EDuration="3.089989944s" podCreationTimestamp="2026-01-27 14:13:12 +0000 UTC" firstStartedPulling="2026-01-27 14:13:13.15048036 +0000 UTC m=+479.734671364" lastFinishedPulling="2026-01-27 14:13:14.658210922 +0000 UTC m=+481.242401926" observedRunningTime="2026-01-27 14:13:15.084973044 +0000 UTC m=+481.669164068" watchObservedRunningTime="2026-01-27 14:13:15.089989944 +0000 UTC m=+481.674180958" Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.783503 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"] Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.784521 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.789960 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-zbmdp"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.793530 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.798099 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.799384 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-9zs5t"]
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.800708 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.802021 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"]
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.803102 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.805512 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"]
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811592 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-metrics-client-ca\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811629 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-root\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811654 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfkct\" (UniqueName: \"kubernetes.io/projected/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-kube-api-access-rfkct\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811743 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-sys\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811763 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-wtmp\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811781 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811801 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811924 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811974 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99e1b684-d4eb-4bcf-868b-445b80362857-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.811999 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/99e1b684-d4eb-4bcf-868b-445b80362857-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812019 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812046 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-tls\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812074 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-textfile\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812092 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8c0fa51e-556b-4b42-979f-b1969154c981-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812106 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6dk5\" (UniqueName: \"kubernetes.io/projected/99e1b684-d4eb-4bcf-868b-445b80362857-kube-api-access-r6dk5\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812122 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbvvt\" (UniqueName: \"kubernetes.io/projected/8c0fa51e-556b-4b42-979f-b1969154c981-kube-api-access-kbvvt\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812141 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8c0fa51e-556b-4b42-979f-b1969154c981-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.812168 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99e1b684-d4eb-4bcf-868b-445b80362857-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.825521 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"]
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.826251 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.826406 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.828794 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.829049 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-6r5d8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.829199 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.829340 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.829586 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-q2d4r"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914018 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-metrics-client-ca\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914064 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-root\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfkct\" (UniqueName: \"kubernetes.io/projected/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-kube-api-access-rfkct\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914134 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-sys\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914155 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-wtmp\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914175 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914196 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914223 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914251 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99e1b684-d4eb-4bcf-868b-445b80362857-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914277 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/99e1b684-d4eb-4bcf-868b-445b80362857-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914298 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914326 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-tls\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914356 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-textfile\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914385 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8c0fa51e-556b-4b42-979f-b1969154c981-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914409 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6dk5\" (UniqueName: \"kubernetes.io/projected/99e1b684-d4eb-4bcf-868b-445b80362857-kube-api-access-r6dk5\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914431 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbvvt\" (UniqueName: \"kubernetes.io/projected/8c0fa51e-556b-4b42-979f-b1969154c981-kube-api-access-kbvvt\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914453 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8c0fa51e-556b-4b42-979f-b1969154c981-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.914483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99e1b684-d4eb-4bcf-868b-445b80362857-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.915385 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/99e1b684-d4eb-4bcf-868b-445b80362857-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.915990 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-metrics-client-ca\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.918128 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-wtmp\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.918222 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-root\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.918834 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-sys\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.922202 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.931623 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/99e1b684-d4eb-4bcf-868b-445b80362857-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.934032 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8c0fa51e-556b-4b42-979f-b1969154c981-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: E0127 14:13:16.934074 4729 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Jan 27 14:13:16 crc kubenswrapper[4729]: E0127 14:13:16.934156 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-tls podName:1da62a2b-5dfe-4cd7-9443-86f494fc0d7f nodeName:}" failed. No retries permitted until 2026-01-27 14:13:17.434136715 +0000 UTC m=+484.018327719 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-tls") pod "node-exporter-9zs5t" (UID: "1da62a2b-5dfe-4cd7-9443-86f494fc0d7f") : secret "node-exporter-tls" not found
Jan 27 14:13:16 crc kubenswrapper[4729]: E0127 14:13:16.934333 4729 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.934353 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-textfile\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: E0127 14:13:16.934368 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-tls podName:8c0fa51e-556b-4b42-979f-b1969154c981 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:17.434360841 +0000 UTC m=+484.018551845 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-xkng8" (UID: "8c0fa51e-556b-4b42-979f-b1969154c981") : secret "kube-state-metrics-tls" not found
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.935106 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8c0fa51e-556b-4b42-979f-b1969154c981-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.936926 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/99e1b684-d4eb-4bcf-868b-445b80362857-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.943474 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.944518 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.963673 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbvvt\" (UniqueName: \"kubernetes.io/projected/8c0fa51e-556b-4b42-979f-b1969154c981-kube-api-access-kbvvt\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.967050 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfkct\" (UniqueName: \"kubernetes.io/projected/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-kube-api-access-rfkct\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:16 crc kubenswrapper[4729]: I0127 14:13:16.972078 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6dk5\" (UniqueName: \"kubernetes.io/projected/99e1b684-d4eb-4bcf-868b-445b80362857-kube-api-access-r6dk5\") pod \"openshift-state-metrics-566fddb674-ktbcs\" (UID: \"99e1b684-d4eb-4bcf-868b-445b80362857\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.101010 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.488137 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs"]
Jan 27 14:13:17 crc kubenswrapper[4729]: W0127 14:13:17.494106 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99e1b684_d4eb_4bcf_868b_445b80362857.slice/crio-6330c2ae00c0e3d36ad3c811e66811994db03d5fbe506f26d6af7c91fabea305 WatchSource:0}: Error finding container 6330c2ae00c0e3d36ad3c811e66811994db03d5fbe506f26d6af7c91fabea305: Status 404 returned error can't find the container with id 6330c2ae00c0e3d36ad3c811e66811994db03d5fbe506f26d6af7c91fabea305
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.523841 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.523971 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-tls\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.530010 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8c0fa51e-556b-4b42-979f-b1969154c981-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-xkng8\" (UID: \"8c0fa51e-556b-4b42-979f-b1969154c981\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.531404 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1da62a2b-5dfe-4cd7-9443-86f494fc0d7f-node-exporter-tls\") pod \"node-exporter-9zs5t\" (UID: \"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f\") " pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.722403 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-9zs5t"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.739947 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.872030 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.874487 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.876571 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.876724 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.878276 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.878386 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.878483 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.878577 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.878753 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.879601 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-95hjb"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.879698 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.904730 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.930657 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.930692 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.930716 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.930886 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-config-volume\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.930967 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931013 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931076 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-web-config\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931104 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931123 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931154 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-config-out\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931183 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:17 crc kubenswrapper[4729]: I0127 14:13:17.931200 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdq2\" (UniqueName: \"kubernetes.io/projected/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-kube-api-access-ktdq2\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.032117 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.032159 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktdq2\" (UniqueName: \"kubernetes.io/projected/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-kube-api-access-ktdq2\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0"
Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.032187 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.032206 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.032224 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.032378 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-config-volume\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033071 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033112 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033151 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-web-config\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033178 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033201 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033227 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-config-out\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033254 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033670 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.033724 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: E0127 14:13:18.033844 4729 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Jan 27 14:13:18 crc kubenswrapper[4729]: E0127 14:13:18.033931 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-main-tls podName:b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8 nodeName:}" failed. No retries permitted until 2026-01-27 14:13:18.53390826 +0000 UTC m=+485.118099264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8") : secret "alertmanager-main-tls" not found Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.038263 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.038851 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.039311 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-config-out\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.039430 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.039674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-web-config\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.039973 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.040749 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-config-volume\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.054405 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdq2\" (UniqueName: \"kubernetes.io/projected/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-kube-api-access-ktdq2\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.088770 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs" event={"ID":"99e1b684-d4eb-4bcf-868b-445b80362857","Type":"ContainerStarted","Data":"1cf6fd3b2cb1f327bc8011efe32fb436fb819d26acd327676c3dfe884192938b"} Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.088807 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs" 
event={"ID":"99e1b684-d4eb-4bcf-868b-445b80362857","Type":"ContainerStarted","Data":"0661b24ce0cdb8457b4180fd579cef7f6594d5b26f53b2aa487d7542a16c3917"} Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.088816 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs" event={"ID":"99e1b684-d4eb-4bcf-868b-445b80362857","Type":"ContainerStarted","Data":"6330c2ae00c0e3d36ad3c811e66811994db03d5fbe506f26d6af7c91fabea305"} Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.090024 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9zs5t" event={"ID":"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f","Type":"ContainerStarted","Data":"99c8fbf53a0e373961b017ffa0ccbfaefb961657c3fd7692cc3d64b85d8f67a7"} Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.191658 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8"] Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.541241 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.548258 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8\") " pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.771172 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t"] Jan 27 14:13:18 crc 
kubenswrapper[4729]: I0127 14:13:18.773342 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.776762 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.776850 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.776849 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.777011 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-6jgrn2fec4adj" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.777051 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.776770 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-ldjmk" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.777152 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.787018 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t"] Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.794331 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845388 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a1cdda-3322-4c35-a777-37537ff94bbf-metrics-client-ca\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845435 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-tls\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845464 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845487 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845596 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845650 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845689 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5cj\" (UniqueName: \"kubernetes.io/projected/79a1cdda-3322-4c35-a777-37537ff94bbf-kube-api-access-vp5cj\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.845720 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-grpc-tls\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946629 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a1cdda-3322-4c35-a777-37537ff94bbf-metrics-client-ca\") pod 
\"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946859 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-tls\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946903 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946923 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946949 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946973 
4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.946997 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5cj\" (UniqueName: \"kubernetes.io/projected/79a1cdda-3322-4c35-a777-37537ff94bbf-kube-api-access-vp5cj\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.947020 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-grpc-tls\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.948131 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/79a1cdda-3322-4c35-a777-37537ff94bbf-metrics-client-ca\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.950467 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: 
\"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.950467 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.950766 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-tls\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.952092 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.953065 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.954746 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/79a1cdda-3322-4c35-a777-37537ff94bbf-secret-grpc-tls\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:18 crc kubenswrapper[4729]: I0127 14:13:18.963488 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5cj\" (UniqueName: \"kubernetes.io/projected/79a1cdda-3322-4c35-a777-37537ff94bbf-kube-api-access-vp5cj\") pod \"thanos-querier-84c8cf7f6c-vps9t\" (UID: \"79a1cdda-3322-4c35-a777-37537ff94bbf\") " pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:19 crc kubenswrapper[4729]: I0127 14:13:19.089077 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:19 crc kubenswrapper[4729]: I0127 14:13:19.095793 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8" event={"ID":"8c0fa51e-556b-4b42-979f-b1969154c981","Type":"ContainerStarted","Data":"385f6f87916aff8d3f9215086d6856b7ed5c847424ce2798cdeb2293eb57126f"} Jan 27 14:13:19 crc kubenswrapper[4729]: I0127 14:13:19.097511 4729 generic.go:334] "Generic (PLEG): container finished" podID="1da62a2b-5dfe-4cd7-9443-86f494fc0d7f" containerID="90d65735dc2093efe140b9d7ae148a73c8a757927ea155d4ea761a596ef0cec2" exitCode=0 Jan 27 14:13:19 crc kubenswrapper[4729]: I0127 14:13:19.097546 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9zs5t" event={"ID":"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f","Type":"ContainerDied","Data":"90d65735dc2093efe140b9d7ae148a73c8a757927ea155d4ea761a596ef0cec2"} Jan 27 14:13:19 crc kubenswrapper[4729]: I0127 14:13:19.517840 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 27 14:13:19 crc kubenswrapper[4729]: W0127 
14:13:19.524510 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9b7fe78_cd34_47ae_8bb3_62f0c8bf7bc8.slice/crio-9b41d32b1f655a19cbd6b53468a8b503485a163647118aae78dcfaeb2a53bf26 WatchSource:0}: Error finding container 9b41d32b1f655a19cbd6b53468a8b503485a163647118aae78dcfaeb2a53bf26: Status 404 returned error can't find the container with id 9b41d32b1f655a19cbd6b53468a8b503485a163647118aae78dcfaeb2a53bf26 Jan 27 14:13:19 crc kubenswrapper[4729]: I0127 14:13:19.576084 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t"] Jan 27 14:13:19 crc kubenswrapper[4729]: W0127 14:13:19.584003 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79a1cdda_3322_4c35_a777_37537ff94bbf.slice/crio-7d2db1f0893ac52a3a6c7781a6665f68ac4efdd5cb1684fec6e42560c0bff66c WatchSource:0}: Error finding container 7d2db1f0893ac52a3a6c7781a6665f68ac4efdd5cb1684fec6e42560c0bff66c: Status 404 returned error can't find the container with id 7d2db1f0893ac52a3a6c7781a6665f68ac4efdd5cb1684fec6e42560c0bff66c Jan 27 14:13:20 crc kubenswrapper[4729]: I0127 14:13:20.105224 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9zs5t" event={"ID":"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f","Type":"ContainerStarted","Data":"45668d50950d613b013dd360816a0dfab2b146a20033c8b2e4a2a58e0399bee5"} Jan 27 14:13:20 crc kubenswrapper[4729]: I0127 14:13:20.107186 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-9zs5t" event={"ID":"1da62a2b-5dfe-4cd7-9443-86f494fc0d7f","Type":"ContainerStarted","Data":"5540d94aeb24fbe4e9f6443e2b944925f2967c8b09eb74e956ef27bd6b78b298"} Jan 27 14:13:20 crc kubenswrapper[4729]: I0127 14:13:20.109071 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs" event={"ID":"99e1b684-d4eb-4bcf-868b-445b80362857","Type":"ContainerStarted","Data":"a2871f8bae02fd2c02193af2557146be53071c5863c393086b540b6c5866c250"} Jan 27 14:13:20 crc kubenswrapper[4729]: I0127 14:13:20.109997 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"7d2db1f0893ac52a3a6c7781a6665f68ac4efdd5cb1684fec6e42560c0bff66c"} Jan 27 14:13:20 crc kubenswrapper[4729]: I0127 14:13:20.110854 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"9b41d32b1f655a19cbd6b53468a8b503485a163647118aae78dcfaeb2a53bf26"} Jan 27 14:13:20 crc kubenswrapper[4729]: I0127 14:13:20.126244 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-9zs5t" podStartSLOduration=3.188060653 podStartE2EDuration="4.126223148s" podCreationTimestamp="2026-01-27 14:13:16 +0000 UTC" firstStartedPulling="2026-01-27 14:13:17.745743467 +0000 UTC m=+484.329934471" lastFinishedPulling="2026-01-27 14:13:18.683905952 +0000 UTC m=+485.268096966" observedRunningTime="2026-01-27 14:13:20.123648447 +0000 UTC m=+486.707839471" watchObservedRunningTime="2026-01-27 14:13:20.126223148 +0000 UTC m=+486.710414152" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.118469 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8" event={"ID":"8c0fa51e-556b-4b42-979f-b1969154c981","Type":"ContainerStarted","Data":"01c927bec91ccbbeb155816f53844809f281b13523791ae9ab7cc5e311684d2b"} Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.118821 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8" event={"ID":"8c0fa51e-556b-4b42-979f-b1969154c981","Type":"ContainerStarted","Data":"60d21b92d2f22f5fe1d05bab436a82bc8e174f7e311154e5d3c6c622b94d5dc3"} Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.118847 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8" event={"ID":"8c0fa51e-556b-4b42-979f-b1969154c981","Type":"ContainerStarted","Data":"bb5f433ee377b9358c322aafcedf054bc16cf08ada35682abe330ab57dabd4ab"} Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.138012 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-ktbcs" podStartSLOduration=3.849009101 podStartE2EDuration="5.137995198s" podCreationTimestamp="2026-01-27 14:13:16 +0000 UTC" firstStartedPulling="2026-01-27 14:13:17.902716568 +0000 UTC m=+484.486907572" lastFinishedPulling="2026-01-27 14:13:19.191702665 +0000 UTC m=+485.775893669" observedRunningTime="2026-01-27 14:13:20.139449208 +0000 UTC m=+486.723640222" watchObservedRunningTime="2026-01-27 14:13:21.137995198 +0000 UTC m=+487.722186202" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.139074 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8" podStartSLOduration=3.2615284940000002 podStartE2EDuration="5.139068238s" podCreationTimestamp="2026-01-27 14:13:16 +0000 UTC" firstStartedPulling="2026-01-27 14:13:18.199664306 +0000 UTC m=+484.783855310" lastFinishedPulling="2026-01-27 14:13:20.07720405 +0000 UTC m=+486.661395054" observedRunningTime="2026-01-27 14:13:21.136809974 +0000 UTC m=+487.721000978" watchObservedRunningTime="2026-01-27 14:13:21.139068238 +0000 UTC m=+487.723259242" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.626813 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console/console-94b79ccc9-z4zb9"] Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.627552 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.644135 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-94b79ccc9-z4zb9"] Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695633 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-service-ca\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695711 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-oauth-serving-cert\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695741 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-console-config\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695762 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-serving-cert\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " 
pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695813 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-trusted-ca-bundle\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695829 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdzh\" (UniqueName: \"kubernetes.io/projected/69a9d143-82ff-449f-885d-37d7c2769bea-kube-api-access-5qdzh\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.695848 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-oauth-config\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796606 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-oauth-serving-cert\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796678 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-console-config\") pod \"console-94b79ccc9-z4zb9\" (UID: 
\"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796709 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-serving-cert\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796747 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-trusted-ca-bundle\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796772 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdzh\" (UniqueName: \"kubernetes.io/projected/69a9d143-82ff-449f-885d-37d7c2769bea-kube-api-access-5qdzh\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796799 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-oauth-config\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.796822 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-service-ca\") pod \"console-94b79ccc9-z4zb9\" (UID: 
\"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.797648 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-oauth-serving-cert\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.797780 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-service-ca\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.797784 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-trusted-ca-bundle\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.798228 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-console-config\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.802374 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-oauth-config\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " 
pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.803917 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-serving-cert\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.816955 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdzh\" (UniqueName: \"kubernetes.io/projected/69a9d143-82ff-449f-885d-37d7c2769bea-kube-api-access-5qdzh\") pod \"console-94b79ccc9-z4zb9\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:21 crc kubenswrapper[4729]: I0127 14:13:21.949436 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.193479 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-85fb57964d-552jf"] Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.194475 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.198725 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dvkdl" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.198757 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.198804 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.199091 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-cdtagp5sad3ka" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.199196 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.199362 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.207261 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85fb57964d-552jf"] Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304177 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-secret-metrics-client-certs\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304240 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304278 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-secret-metrics-server-tls\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304301 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ktj\" (UniqueName: \"kubernetes.io/projected/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-kube-api-access-x8ktj\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304324 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-audit-log\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304438 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-client-ca-bundle\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " 
pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.304617 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-metrics-server-audit-profiles\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.405894 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-client-ca-bundle\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.405967 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-metrics-server-audit-profiles\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.406007 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-secret-metrics-client-certs\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.406031 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.406057 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ktj\" (UniqueName: \"kubernetes.io/projected/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-kube-api-access-x8ktj\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.406078 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-secret-metrics-server-tls\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.406100 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-audit-log\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.406704 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-audit-log\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.407032 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.407390 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-metrics-server-audit-profiles\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.410515 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-secret-metrics-client-certs\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.410599 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-client-ca-bundle\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.410909 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-secret-metrics-server-tls\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 
14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.421381 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ktj\" (UniqueName: \"kubernetes.io/projected/a04e23f6-2034-4b06-9772-3b8ae9a3afa0-kube-api-access-x8ktj\") pod \"metrics-server-85fb57964d-552jf\" (UID: \"a04e23f6-2034-4b06-9772-3b8ae9a3afa0\") " pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.520010 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.597680 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq"] Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.599932 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.606220 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq"] Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.609991 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.610207 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.709598 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5305486-7e58-4a91-b374-1ad6e00a9709-monitoring-plugin-cert\") pod \"monitoring-plugin-67cc6b99d8-4zqkq\" (UID: \"c5305486-7e58-4a91-b374-1ad6e00a9709\") " pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:22 crc 
kubenswrapper[4729]: I0127 14:13:22.810889 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5305486-7e58-4a91-b374-1ad6e00a9709-monitoring-plugin-cert\") pod \"monitoring-plugin-67cc6b99d8-4zqkq\" (UID: \"c5305486-7e58-4a91-b374-1ad6e00a9709\") " pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.817236 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c5305486-7e58-4a91-b374-1ad6e00a9709-monitoring-plugin-cert\") pod \"monitoring-plugin-67cc6b99d8-4zqkq\" (UID: \"c5305486-7e58-4a91-b374-1ad6e00a9709\") " pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.890807 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-94b79ccc9-z4zb9"] Jan 27 14:13:22 crc kubenswrapper[4729]: W0127 14:13:22.905793 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a9d143_82ff_449f_885d_37d7c2769bea.slice/crio-dcda14156e6c75ecc10258ba3b1fc5783c20c122ef0276b0843d172ea641c80c WatchSource:0}: Error finding container dcda14156e6c75ecc10258ba3b1fc5783c20c122ef0276b0843d172ea641c80c: Status 404 returned error can't find the container with id dcda14156e6c75ecc10258ba3b1fc5783c20c122ef0276b0843d172ea641c80c Jan 27 14:13:22 crc kubenswrapper[4729]: I0127 14:13:22.927604 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.049213 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-85fb57964d-552jf"] Jan 27 14:13:23 crc kubenswrapper[4729]: W0127 14:13:23.056261 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda04e23f6_2034_4b06_9772_3b8ae9a3afa0.slice/crio-96f2d5e2aa0f965b87188f84b13c9ee2d796fd9914832f36ab685b0f9498f6ec WatchSource:0}: Error finding container 96f2d5e2aa0f965b87188f84b13c9ee2d796fd9914832f36ab685b0f9498f6ec: Status 404 returned error can't find the container with id 96f2d5e2aa0f965b87188f84b13c9ee2d796fd9914832f36ab685b0f9498f6ec Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.142101 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.150821 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-94b79ccc9-z4zb9" event={"ID":"69a9d143-82ff-449f-885d-37d7c2769bea","Type":"ContainerStarted","Data":"2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.150890 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-94b79ccc9-z4zb9" event={"ID":"69a9d143-82ff-449f-885d-37d7c2769bea","Type":"ContainerStarted","Data":"dcda14156e6c75ecc10258ba3b1fc5783c20c122ef0276b0843d172ea641c80c"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.150997 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.151337 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.155378 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.155830 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.156145 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.156310 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.156451 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.156556 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.157547 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.157612 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-cu7h6l03c8720" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.157705 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.158208 4729 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.158290 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bcn9k" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.158701 4729 generic.go:334] "Generic (PLEG): container finished" podID="b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8" containerID="d019581c11eac43536720339b255bdfd63655b82f6e673cb2e87a437d720c53d" exitCode=0 Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.159685 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerDied","Data":"d019581c11eac43536720339b255bdfd63655b82f6e673cb2e87a437d720c53d"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.161755 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.163630 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"e2a36c4389b8fd4560cffefdd57e7849c8a8e4dd2d9f27672de69015a8233ca5"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.163668 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"e391f5377bb1ba379346b771fc946f13461d1cd6e717c920587fbd0ffef318a9"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.163678 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" 
event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"cc45531a783c7b4c5fb6d43267026291c2257dbe98ffcd66cf1a98c3437cc533"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.166688 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" event={"ID":"a04e23f6-2034-4b06-9772-3b8ae9a3afa0","Type":"ContainerStarted","Data":"96f2d5e2aa0f965b87188f84b13c9ee2d796fd9914832f36ab685b0f9498f6ec"} Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.168797 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.171511 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-94b79ccc9-z4zb9" podStartSLOduration=2.171496904 podStartE2EDuration="2.171496904s" podCreationTimestamp="2026-01-27 14:13:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:13:23.170647171 +0000 UTC m=+489.754838185" watchObservedRunningTime="2026-01-27 14:13:23.171496904 +0000 UTC m=+489.755687908" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.214935 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215021 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: 
\"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215119 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-web-config\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215143 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215204 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215229 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215251 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215274 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-config-out\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215329 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215391 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-config\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.215416 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.216641 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.216777 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.216866 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.216941 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgbtk\" (UniqueName: \"kubernetes.io/projected/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-kube-api-access-dgbtk\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.216966 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.217001 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.217110 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.318180 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-config-out\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.318522 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.318667 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-config\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.318782 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.318919 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319025 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319111 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319209 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgbtk\" (UniqueName: \"kubernetes.io/projected/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-kube-api-access-dgbtk\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319298 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319384 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319572 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319635 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 
14:13:23.319739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319864 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-web-config\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.319967 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.320066 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.320208 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.320301 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.320391 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.323815 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.324031 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.324116 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.324144 
4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.324980 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.325264 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.325738 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.326472 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.326860 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.327252 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.328456 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-config-out\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.328801 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.330420 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-config\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.330574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-web-config\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.331556 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq"] Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.334397 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: W0127 14:13:23.337471 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5305486_7e58_4a91_b374_1ad6e00a9709.slice/crio-125b88bff04936b24f2fac17d12e20da3fb0583a20f6310a18362d535d096e0a WatchSource:0}: Error finding container 125b88bff04936b24f2fac17d12e20da3fb0583a20f6310a18362d535d096e0a: Status 404 returned error can't find the container with id 125b88bff04936b24f2fac17d12e20da3fb0583a20f6310a18362d535d096e0a Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.337758 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgbtk\" (UniqueName: \"kubernetes.io/projected/e7ae189f-1129-4ec5-b609-9bd7eac4f1de-kube-api-access-dgbtk\") pod \"prometheus-k8s-0\" (UID: \"e7ae189f-1129-4ec5-b609-9bd7eac4f1de\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.476765 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:23 crc kubenswrapper[4729]: I0127 14:13:23.882322 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 27 14:13:23 crc kubenswrapper[4729]: W0127 14:13:23.968168 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7ae189f_1129_4ec5_b609_9bd7eac4f1de.slice/crio-c46b9580ab8ce8613b50a2f75239e907f00368b8255f1f36c04c0d4b93342401 WatchSource:0}: Error finding container c46b9580ab8ce8613b50a2f75239e907f00368b8255f1f36c04c0d4b93342401: Status 404 returned error can't find the container with id c46b9580ab8ce8613b50a2f75239e907f00368b8255f1f36c04c0d4b93342401 Jan 27 14:13:24 crc kubenswrapper[4729]: I0127 14:13:24.175648 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"0125e444b17bbeeab300ab7e54a677c6e09e993fb39445e2dc9111b3eae0f293"} Jan 27 14:13:24 crc kubenswrapper[4729]: I0127 14:13:24.176391 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"c46b9580ab8ce8613b50a2f75239e907f00368b8255f1f36c04c0d4b93342401"} Jan 27 14:13:24 crc kubenswrapper[4729]: I0127 14:13:24.178994 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" event={"ID":"c5305486-7e58-4a91-b374-1ad6e00a9709","Type":"ContainerStarted","Data":"125b88bff04936b24f2fac17d12e20da3fb0583a20f6310a18362d535d096e0a"} Jan 27 14:13:24 crc kubenswrapper[4729]: I0127 14:13:24.182925 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" 
event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"e69da0e1e11271e94a4452ce4c517d48b4c606c0a7ef38de338fd90a088216c1"} Jan 27 14:13:25 crc kubenswrapper[4729]: I0127 14:13:25.200977 4729 generic.go:334] "Generic (PLEG): container finished" podID="e7ae189f-1129-4ec5-b609-9bd7eac4f1de" containerID="0125e444b17bbeeab300ab7e54a677c6e09e993fb39445e2dc9111b3eae0f293" exitCode=0 Jan 27 14:13:25 crc kubenswrapper[4729]: I0127 14:13:25.201040 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerDied","Data":"0125e444b17bbeeab300ab7e54a677c6e09e993fb39445e2dc9111b3eae0f293"} Jan 27 14:13:25 crc kubenswrapper[4729]: I0127 14:13:25.206845 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"5b1c87355611a3b8e0d285f0e20ca1f79e04713eb59f74f316a55f2a435c6c90"} Jan 27 14:13:25 crc kubenswrapper[4729]: I0127 14:13:25.206891 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" event={"ID":"79a1cdda-3322-4c35-a777-37537ff94bbf","Type":"ContainerStarted","Data":"a444d66d37e67f34145f0efa1af1fcfdd06aaa3cce7e31a97bf97467140381c2"} Jan 27 14:13:25 crc kubenswrapper[4729]: I0127 14:13:25.207094 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:25 crc kubenswrapper[4729]: I0127 14:13:25.254336 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" podStartSLOduration=2.8112224770000003 podStartE2EDuration="7.254313147s" podCreationTimestamp="2026-01-27 14:13:18 +0000 UTC" firstStartedPulling="2026-01-27 14:13:19.586199406 +0000 UTC m=+486.170390410" 
lastFinishedPulling="2026-01-27 14:13:24.029290076 +0000 UTC m=+490.613481080" observedRunningTime="2026-01-27 14:13:25.250175392 +0000 UTC m=+491.834366436" watchObservedRunningTime="2026-01-27 14:13:25.254313147 +0000 UTC m=+491.838504191" Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.215568 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" event={"ID":"c5305486-7e58-4a91-b374-1ad6e00a9709","Type":"ContainerStarted","Data":"2443d7c9f3b4f50d8f20c94be71da56be846c58db53bc282720934bbc029dcb6"} Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.216198 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.217615 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" event={"ID":"a04e23f6-2034-4b06-9772-3b8ae9a3afa0","Type":"ContainerStarted","Data":"40aed23acf70f90d7d194a34a55a8de2889835882f696b52d94369c39ea3e5e6"} Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.220228 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"bfedae5e3c4ab2ac2725c63e68e06a410128739094700444faae63669d123d32"} Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.220267 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"94b1a050a59616ce4edfef8ac62a9e42889c1e483ebd6650c0a512593f18a6d2"} Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.221246 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.236250 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-67cc6b99d8-4zqkq" podStartSLOduration=1.765888854 podStartE2EDuration="4.236228693s" podCreationTimestamp="2026-01-27 14:13:22 +0000 UTC" firstStartedPulling="2026-01-27 14:13:23.339486243 +0000 UTC m=+489.923677247" lastFinishedPulling="2026-01-27 14:13:25.809826082 +0000 UTC m=+492.394017086" observedRunningTime="2026-01-27 14:13:26.232368086 +0000 UTC m=+492.816559090" watchObservedRunningTime="2026-01-27 14:13:26.236228693 +0000 UTC m=+492.820419697" Jan 27 14:13:26 crc kubenswrapper[4729]: I0127 14:13:26.282220 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" podStartSLOduration=1.5418703310000001 podStartE2EDuration="4.282199136s" podCreationTimestamp="2026-01-27 14:13:22 +0000 UTC" firstStartedPulling="2026-01-27 14:13:23.068303204 +0000 UTC m=+489.652494208" lastFinishedPulling="2026-01-27 14:13:25.808632009 +0000 UTC m=+492.392823013" observedRunningTime="2026-01-27 14:13:26.27554496 +0000 UTC m=+492.859735964" watchObservedRunningTime="2026-01-27 14:13:26.282199136 +0000 UTC m=+492.866390140" Jan 27 14:13:27 crc kubenswrapper[4729]: I0127 14:13:27.235145 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"ca20c9b0a01d3356d6d277378e812399326d46ab2a6e659d1b74951f93f03065"} Jan 27 14:13:27 crc kubenswrapper[4729]: I0127 14:13:27.235640 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"d24973381d0689947256a86580c633754efc450885dfebdb46371ef1c2d1c2e3"} Jan 27 14:13:27 crc kubenswrapper[4729]: I0127 14:13:27.235659 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"17586e366212a5fdf85264a2382020deddec8a9aaae99eeffa645f6c06177621"} Jan 27 14:13:27 crc kubenswrapper[4729]: I0127 14:13:27.235669 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"b9b7fe78-cd34-47ae-8bb3-62f0c8bf7bc8","Type":"ContainerStarted","Data":"7fc19a51e7bdf71ae5d5a3cf5eb09a578937593cdb34f2c2066d34e193a167e3"} Jan 27 14:13:27 crc kubenswrapper[4729]: I0127 14:13:27.266496 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.96143635 podStartE2EDuration="10.266476898s" podCreationTimestamp="2026-01-27 14:13:17 +0000 UTC" firstStartedPulling="2026-01-27 14:13:19.527277972 +0000 UTC m=+486.111468976" lastFinishedPulling="2026-01-27 14:13:25.8323185 +0000 UTC m=+492.416509524" observedRunningTime="2026-01-27 14:13:27.265139811 +0000 UTC m=+493.849330845" watchObservedRunningTime="2026-01-27 14:13:27.266476898 +0000 UTC m=+493.850667902" Jan 27 14:13:29 crc kubenswrapper[4729]: I0127 14:13:29.098807 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-84c8cf7f6c-vps9t" Jan 27 14:13:29 crc kubenswrapper[4729]: I0127 14:13:29.597657 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-h57fq" Jan 27 14:13:29 crc kubenswrapper[4729]: I0127 14:13:29.658503 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck87f"] Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.254370 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"b49e0eed411e43b12bcdf07bff1ea2163a4a3cf69ee98885c9c69f603d90ea8b"} Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.254715 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"380c9d46cb87f40fd722aea332a9fde4c2f7d290844a02772bc1845f780be591"} Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.254730 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"84c1c1e67ecc15446ab8b3497e4208c6a56d64b1f396ab36fa0c12216639ba72"} Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.254740 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"560602be29df3b3cbe66ce36d912e698a91376fb1051ad3aa351937596a6cb26"} Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.254750 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"f70b3e4d8136defa3c889f0f83f3b7e3ed12d9b50c05e1c2ff0189626c0bcb1b"} Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.254759 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"e7ae189f-1129-4ec5-b609-9bd7eac4f1de","Type":"ContainerStarted","Data":"641b21deb8734b3f0445bc78121b46917165dbccc99367ce2d04991b30fa393f"} Jan 27 14:13:30 crc kubenswrapper[4729]: I0127 14:13:30.290867 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.033618109 podStartE2EDuration="7.290849581s" 
podCreationTimestamp="2026-01-27 14:13:23 +0000 UTC" firstStartedPulling="2026-01-27 14:13:25.203274703 +0000 UTC m=+491.787465707" lastFinishedPulling="2026-01-27 14:13:29.460506175 +0000 UTC m=+496.044697179" observedRunningTime="2026-01-27 14:13:30.28473013 +0000 UTC m=+496.868921134" watchObservedRunningTime="2026-01-27 14:13:30.290849581 +0000 UTC m=+496.875040585" Jan 27 14:13:31 crc kubenswrapper[4729]: I0127 14:13:31.950150 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:31 crc kubenswrapper[4729]: I0127 14:13:31.950446 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:31 crc kubenswrapper[4729]: I0127 14:13:31.957557 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:32 crc kubenswrapper[4729]: I0127 14:13:32.274493 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:13:32 crc kubenswrapper[4729]: I0127 14:13:32.335800 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cg69z"] Jan 27 14:13:33 crc kubenswrapper[4729]: I0127 14:13:33.478017 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:13:42 crc kubenswrapper[4729]: I0127 14:13:42.520788 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:42 crc kubenswrapper[4729]: I0127 14:13:42.521429 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:13:52 crc kubenswrapper[4729]: I0127 14:13:52.655515 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:13:52 crc kubenswrapper[4729]: I0127 14:13:52.656392 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:13:54 crc kubenswrapper[4729]: I0127 14:13:54.695212 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" podUID="495678a7-f4e5-4ada-8da6-e4d573a2b7e0" containerName="registry" containerID="cri-o://cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff" gracePeriod=30 Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.098545 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205097 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl2zr\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-kube-api-access-vl2zr\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-certificates\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205408 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-bound-sa-token\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205475 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-trusted-ca\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205569 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-ca-trust-extracted\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205639 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-installation-pull-secrets\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205784 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.205859 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-tls\") pod \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\" (UID: \"495678a7-f4e5-4ada-8da6-e4d573a2b7e0\") " Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.206375 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.207180 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.211789 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-kube-api-access-vl2zr" (OuterVolumeSpecName: "kube-api-access-vl2zr") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "kube-api-access-vl2zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.213159 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.213546 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.213648 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.218531 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.220847 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "495678a7-f4e5-4ada-8da6-e4d573a2b7e0" (UID: "495678a7-f4e5-4ada-8da6-e4d573a2b7e0"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307530 4729 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307564 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307577 4729 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307588 4729 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307602 4729 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307613 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl2zr\" (UniqueName: \"kubernetes.io/projected/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-kube-api-access-vl2zr\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.307624 4729 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495678a7-f4e5-4ada-8da6-e4d573a2b7e0-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.440515 4729 generic.go:334] "Generic (PLEG): container finished" podID="495678a7-f4e5-4ada-8da6-e4d573a2b7e0" containerID="cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff" exitCode=0 Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.440561 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" event={"ID":"495678a7-f4e5-4ada-8da6-e4d573a2b7e0","Type":"ContainerDied","Data":"cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff"} Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.440573 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.440589 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ck87f" event={"ID":"495678a7-f4e5-4ada-8da6-e4d573a2b7e0","Type":"ContainerDied","Data":"93a50a9e6dd0102ee1ccc7bf2f6e4069147a8da3c645b40a7ed83aee820c2da8"} Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.440608 4729 scope.go:117] "RemoveContainer" containerID="cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.471047 4729 scope.go:117] "RemoveContainer" containerID="cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff" Jan 27 14:13:55 crc kubenswrapper[4729]: E0127 14:13:55.471806 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff\": container with ID starting with cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff not found: ID does not exist" containerID="cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.471860 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff"} err="failed to get container status \"cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff\": rpc error: code = NotFound desc = could not find container \"cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff\": container with ID starting with cc58e5146454ea9e4524c826a9e059da0ae50b14fce82d3e31ba96ef8e5b26ff not found: ID does not exist" Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.477125 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-ck87f"] Jan 27 14:13:55 crc kubenswrapper[4729]: I0127 14:13:55.481799 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ck87f"] Jan 27 14:13:56 crc kubenswrapper[4729]: I0127 14:13:56.060107 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495678a7-f4e5-4ada-8da6-e4d573a2b7e0" path="/var/lib/kubelet/pods/495678a7-f4e5-4ada-8da6-e4d573a2b7e0/volumes" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.380069 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cg69z" podUID="8e60df4d-540b-489f-a297-46f35014add0" containerName="console" containerID="cri-o://0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024" gracePeriod=15 Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.781006 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cg69z_8e60df4d-540b-489f-a297-46f35014add0/console/0.log" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.781338 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841668 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbmcc\" (UniqueName: \"kubernetes.io/projected/8e60df4d-540b-489f-a297-46f35014add0-kube-api-access-jbmcc\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841729 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-serving-cert\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841800 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-service-ca\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841828 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841849 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-oauth-config\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841867 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-trusted-ca-bundle\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.841916 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-oauth-serving-cert\") pod \"8e60df4d-540b-489f-a297-46f35014add0\" (UID: \"8e60df4d-540b-489f-a297-46f35014add0\") " Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.842906 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-service-ca" (OuterVolumeSpecName: "service-ca") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.842930 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.842917 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config" (OuterVolumeSpecName: "console-config") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.843264 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.847697 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.848045 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.848951 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e60df4d-540b-489f-a297-46f35014add0-kube-api-access-jbmcc" (OuterVolumeSpecName: "kube-api-access-jbmcc") pod "8e60df4d-540b-489f-a297-46f35014add0" (UID: "8e60df4d-540b-489f-a297-46f35014add0"). InnerVolumeSpecName "kube-api-access-jbmcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943758 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943796 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943807 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943816 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943825 4729 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e60df4d-540b-489f-a297-46f35014add0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943833 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbmcc\" (UniqueName: \"kubernetes.io/projected/8e60df4d-540b-489f-a297-46f35014add0-kube-api-access-jbmcc\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:57 crc kubenswrapper[4729]: I0127 14:13:57.943841 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e60df4d-540b-489f-a297-46f35014add0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:13:58 crc 
kubenswrapper[4729]: I0127 14:13:58.461799 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cg69z_8e60df4d-540b-489f-a297-46f35014add0/console/0.log" Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.462149 4729 generic.go:334] "Generic (PLEG): container finished" podID="8e60df4d-540b-489f-a297-46f35014add0" containerID="0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024" exitCode=2 Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.462210 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cg69z" event={"ID":"8e60df4d-540b-489f-a297-46f35014add0","Type":"ContainerDied","Data":"0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024"} Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.462235 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cg69z" event={"ID":"8e60df4d-540b-489f-a297-46f35014add0","Type":"ContainerDied","Data":"a1d8ad70d27ebcd86cf61da706c5300662e4e86d872708b4a2dfa885a5f5554f"} Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.462252 4729 scope.go:117] "RemoveContainer" containerID="0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024" Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.462312 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cg69z" Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.489538 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cg69z"] Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.489704 4729 scope.go:117] "RemoveContainer" containerID="0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024" Jan 27 14:13:58 crc kubenswrapper[4729]: E0127 14:13:58.490620 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024\": container with ID starting with 0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024 not found: ID does not exist" containerID="0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024" Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.490655 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024"} err="failed to get container status \"0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024\": rpc error: code = NotFound desc = could not find container \"0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024\": container with ID starting with 0bae337a2852284c5a82b454d9e5b589f48a60aadca05b2664205d63db552024 not found: ID does not exist" Jan 27 14:13:58 crc kubenswrapper[4729]: I0127 14:13:58.511389 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cg69z"] Jan 27 14:14:00 crc kubenswrapper[4729]: I0127 14:14:00.066650 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e60df4d-540b-489f-a297-46f35014add0" path="/var/lib/kubelet/pods/8e60df4d-540b-489f-a297-46f35014add0/volumes" Jan 27 14:14:02 crc kubenswrapper[4729]: I0127 14:14:02.527342 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:14:02 crc kubenswrapper[4729]: I0127 14:14:02.533491 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" Jan 27 14:14:22 crc kubenswrapper[4729]: I0127 14:14:22.655338 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:14:22 crc kubenswrapper[4729]: I0127 14:14:22.656606 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:14:23 crc kubenswrapper[4729]: I0127 14:14:23.477181 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:14:23 crc kubenswrapper[4729]: I0127 14:14:23.504402 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:14:23 crc kubenswrapper[4729]: I0127 14:14:23.639993 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.655403 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.656038 
4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.656097 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.656704 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4dd419eedd32250b5a6b51c6a6227f2b147584e939fb9591ec1e180b53cbd253"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.656765 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://4dd419eedd32250b5a6b51c6a6227f2b147584e939fb9591ec1e180b53cbd253" gracePeriod=600 Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.782399 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="4dd419eedd32250b5a6b51c6a6227f2b147584e939fb9591ec1e180b53cbd253" exitCode=0 Jan 27 14:14:52 crc kubenswrapper[4729]: I0127 14:14:52.782447 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"4dd419eedd32250b5a6b51c6a6227f2b147584e939fb9591ec1e180b53cbd253"} Jan 27 14:14:52 crc kubenswrapper[4729]: 
I0127 14:14:52.782526 4729 scope.go:117] "RemoveContainer" containerID="3d6ab415e1f8e163dce5bc940fd983790a5e9e14fc18252da80a42960d3d6ae0" Jan 27 14:14:53 crc kubenswrapper[4729]: I0127 14:14:53.790946 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"c10316ef0408393982d83c857b8476129168afee8da189de5ad004580a185d5b"} Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.653436 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d975867cb-plmkl"] Jan 27 14:14:56 crc kubenswrapper[4729]: E0127 14:14:56.654439 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e60df4d-540b-489f-a297-46f35014add0" containerName="console" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.654457 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e60df4d-540b-489f-a297-46f35014add0" containerName="console" Jan 27 14:14:56 crc kubenswrapper[4729]: E0127 14:14:56.654490 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495678a7-f4e5-4ada-8da6-e4d573a2b7e0" containerName="registry" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.654499 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="495678a7-f4e5-4ada-8da6-e4d573a2b7e0" containerName="registry" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.654630 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e60df4d-540b-489f-a297-46f35014add0" containerName="console" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.654652 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="495678a7-f4e5-4ada-8da6-e4d573a2b7e0" containerName="registry" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.655162 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.677158 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d975867cb-plmkl"] Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786328 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wccd5\" (UniqueName: \"kubernetes.io/projected/d8505b6a-3041-4b35-a246-82135f37a1bc-kube-api-access-wccd5\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786382 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-oauth-config\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786407 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-trusted-ca-bundle\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786472 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-service-ca\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786521 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-oauth-serving-cert\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786575 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-console-config\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.786627 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-serving-cert\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.887950 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wccd5\" (UniqueName: \"kubernetes.io/projected/d8505b6a-3041-4b35-a246-82135f37a1bc-kube-api-access-wccd5\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.888016 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-oauth-config\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 
14:14:56.888038 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-trusted-ca-bundle\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.888091 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-service-ca\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.888128 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-oauth-serving-cert\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.889076 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-console-config\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.889195 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-oauth-serving-cert\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.889156 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-service-ca\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.889292 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-trusted-ca-bundle\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.889545 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-serving-cert\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.889797 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-console-config\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.894221 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-oauth-config\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.894305 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-serving-cert\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.905598 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wccd5\" (UniqueName: \"kubernetes.io/projected/d8505b6a-3041-4b35-a246-82135f37a1bc-kube-api-access-wccd5\") pod \"console-d975867cb-plmkl\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:56 crc kubenswrapper[4729]: I0127 14:14:56.984260 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:14:57 crc kubenswrapper[4729]: I0127 14:14:57.163185 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d975867cb-plmkl"] Jan 27 14:14:57 crc kubenswrapper[4729]: I0127 14:14:57.814620 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d975867cb-plmkl" event={"ID":"d8505b6a-3041-4b35-a246-82135f37a1bc","Type":"ContainerStarted","Data":"f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74"} Jan 27 14:14:57 crc kubenswrapper[4729]: I0127 14:14:57.814962 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d975867cb-plmkl" event={"ID":"d8505b6a-3041-4b35-a246-82135f37a1bc","Type":"ContainerStarted","Data":"4db78b122c5e16ef25411ea952b919976572d8a83e6d0dca3ff56efb9d20409a"} Jan 27 14:14:57 crc kubenswrapper[4729]: I0127 14:14:57.845287 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d975867cb-plmkl" podStartSLOduration=1.845264075 podStartE2EDuration="1.845264075s" podCreationTimestamp="2026-01-27 14:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-27 14:14:57.841409577 +0000 UTC m=+584.425600581" watchObservedRunningTime="2026-01-27 14:14:57.845264075 +0000 UTC m=+584.429455079" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.172746 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt"] Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.174823 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.177652 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.177851 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.181972 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt"] Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.232303 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpt4p\" (UniqueName: \"kubernetes.io/projected/797af56f-5ea4-435e-b09b-0b0901afb74e-kube-api-access-mpt4p\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.232412 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797af56f-5ea4-435e-b09b-0b0901afb74e-config-volume\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.232665 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797af56f-5ea4-435e-b09b-0b0901afb74e-secret-volume\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.333600 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797af56f-5ea4-435e-b09b-0b0901afb74e-secret-volume\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.333690 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpt4p\" (UniqueName: \"kubernetes.io/projected/797af56f-5ea4-435e-b09b-0b0901afb74e-kube-api-access-mpt4p\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.333732 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797af56f-5ea4-435e-b09b-0b0901afb74e-config-volume\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.334756 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/797af56f-5ea4-435e-b09b-0b0901afb74e-config-volume\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.340684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797af56f-5ea4-435e-b09b-0b0901afb74e-secret-volume\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.354588 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpt4p\" (UniqueName: \"kubernetes.io/projected/797af56f-5ea4-435e-b09b-0b0901afb74e-kube-api-access-mpt4p\") pod \"collect-profiles-29492055-4fgmt\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.492210 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.703163 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt"] Jan 27 14:15:00 crc kubenswrapper[4729]: W0127 14:15:00.711632 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod797af56f_5ea4_435e_b09b_0b0901afb74e.slice/crio-8d90d68f4667da74f1750e8ccc5dbdb3b8982b93911a6ed59cfb06020ca2ea36 WatchSource:0}: Error finding container 8d90d68f4667da74f1750e8ccc5dbdb3b8982b93911a6ed59cfb06020ca2ea36: Status 404 returned error can't find the container with id 8d90d68f4667da74f1750e8ccc5dbdb3b8982b93911a6ed59cfb06020ca2ea36 Jan 27 14:15:00 crc kubenswrapper[4729]: I0127 14:15:00.832513 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" event={"ID":"797af56f-5ea4-435e-b09b-0b0901afb74e","Type":"ContainerStarted","Data":"8d90d68f4667da74f1750e8ccc5dbdb3b8982b93911a6ed59cfb06020ca2ea36"} Jan 27 14:15:01 crc kubenswrapper[4729]: I0127 14:15:01.843053 4729 generic.go:334] "Generic (PLEG): container finished" podID="797af56f-5ea4-435e-b09b-0b0901afb74e" containerID="a278762361dab3dec665068a6b7f5653067248dab716ef9eadbf97a7eae07fd6" exitCode=0 Jan 27 14:15:01 crc kubenswrapper[4729]: I0127 14:15:01.843150 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" event={"ID":"797af56f-5ea4-435e-b09b-0b0901afb74e","Type":"ContainerDied","Data":"a278762361dab3dec665068a6b7f5653067248dab716ef9eadbf97a7eae07fd6"} Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.068160 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.173768 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpt4p\" (UniqueName: \"kubernetes.io/projected/797af56f-5ea4-435e-b09b-0b0901afb74e-kube-api-access-mpt4p\") pod \"797af56f-5ea4-435e-b09b-0b0901afb74e\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.173918 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797af56f-5ea4-435e-b09b-0b0901afb74e-secret-volume\") pod \"797af56f-5ea4-435e-b09b-0b0901afb74e\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.174001 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797af56f-5ea4-435e-b09b-0b0901afb74e-config-volume\") pod \"797af56f-5ea4-435e-b09b-0b0901afb74e\" (UID: \"797af56f-5ea4-435e-b09b-0b0901afb74e\") " Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.175212 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797af56f-5ea4-435e-b09b-0b0901afb74e-config-volume" (OuterVolumeSpecName: "config-volume") pod "797af56f-5ea4-435e-b09b-0b0901afb74e" (UID: "797af56f-5ea4-435e-b09b-0b0901afb74e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.179326 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797af56f-5ea4-435e-b09b-0b0901afb74e-kube-api-access-mpt4p" (OuterVolumeSpecName: "kube-api-access-mpt4p") pod "797af56f-5ea4-435e-b09b-0b0901afb74e" (UID: "797af56f-5ea4-435e-b09b-0b0901afb74e"). 
InnerVolumeSpecName "kube-api-access-mpt4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.180840 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797af56f-5ea4-435e-b09b-0b0901afb74e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "797af56f-5ea4-435e-b09b-0b0901afb74e" (UID: "797af56f-5ea4-435e-b09b-0b0901afb74e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.276045 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpt4p\" (UniqueName: \"kubernetes.io/projected/797af56f-5ea4-435e-b09b-0b0901afb74e-kube-api-access-mpt4p\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.276082 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/797af56f-5ea4-435e-b09b-0b0901afb74e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.276091 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/797af56f-5ea4-435e-b09b-0b0901afb74e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.858040 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" event={"ID":"797af56f-5ea4-435e-b09b-0b0901afb74e","Type":"ContainerDied","Data":"8d90d68f4667da74f1750e8ccc5dbdb3b8982b93911a6ed59cfb06020ca2ea36"} Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.858471 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d90d68f4667da74f1750e8ccc5dbdb3b8982b93911a6ed59cfb06020ca2ea36" Jan 27 14:15:03 crc kubenswrapper[4729]: I0127 14:15:03.858151 4729 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt" Jan 27 14:15:06 crc kubenswrapper[4729]: I0127 14:15:06.985202 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:15:06 crc kubenswrapper[4729]: I0127 14:15:06.985276 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:15:06 crc kubenswrapper[4729]: I0127 14:15:06.992419 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:15:07 crc kubenswrapper[4729]: I0127 14:15:07.884921 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:15:07 crc kubenswrapper[4729]: I0127 14:15:07.935524 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-94b79ccc9-z4zb9"] Jan 27 14:15:32 crc kubenswrapper[4729]: I0127 14:15:32.985567 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-94b79ccc9-z4zb9" podUID="69a9d143-82ff-449f-885d-37d7c2769bea" containerName="console" containerID="cri-o://2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69" gracePeriod=15 Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.309612 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-94b79ccc9-z4zb9_69a9d143-82ff-449f-885d-37d7c2769bea/console/0.log" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.309908 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406046 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-serving-cert\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406116 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-oauth-serving-cert\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406135 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-trusted-ca-bundle\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406156 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-console-config\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406175 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-oauth-config\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406222 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-service-ca\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.406263 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qdzh\" (UniqueName: \"kubernetes.io/projected/69a9d143-82ff-449f-885d-37d7c2769bea-kube-api-access-5qdzh\") pod \"69a9d143-82ff-449f-885d-37d7c2769bea\" (UID: \"69a9d143-82ff-449f-885d-37d7c2769bea\") " Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.407171 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.407211 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-service-ca" (OuterVolumeSpecName: "service-ca") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.407630 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-console-config" (OuterVolumeSpecName: "console-config") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.407638 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.410500 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.410539 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a9d143-82ff-449f-885d-37d7c2769bea-kube-api-access-5qdzh" (OuterVolumeSpecName: "kube-api-access-5qdzh") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "kube-api-access-5qdzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.410691 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "69a9d143-82ff-449f-885d-37d7c2769bea" (UID: "69a9d143-82ff-449f-885d-37d7c2769bea"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508151 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508207 4729 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508228 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508247 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508265 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/69a9d143-82ff-449f-885d-37d7c2769bea-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508283 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/69a9d143-82ff-449f-885d-37d7c2769bea-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:33 crc kubenswrapper[4729]: I0127 14:15:33.508302 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qdzh\" (UniqueName: \"kubernetes.io/projected/69a9d143-82ff-449f-885d-37d7c2769bea-kube-api-access-5qdzh\") on node \"crc\" DevicePath \"\"" Jan 27 14:15:34 crc 
kubenswrapper[4729]: I0127 14:15:34.033296 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-94b79ccc9-z4zb9_69a9d143-82ff-449f-885d-37d7c2769bea/console/0.log" Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.034024 4729 generic.go:334] "Generic (PLEG): container finished" podID="69a9d143-82ff-449f-885d-37d7c2769bea" containerID="2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69" exitCode=2 Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.034083 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-94b79ccc9-z4zb9" Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.034073 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-94b79ccc9-z4zb9" event={"ID":"69a9d143-82ff-449f-885d-37d7c2769bea","Type":"ContainerDied","Data":"2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69"} Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.034210 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-94b79ccc9-z4zb9" event={"ID":"69a9d143-82ff-449f-885d-37d7c2769bea","Type":"ContainerDied","Data":"dcda14156e6c75ecc10258ba3b1fc5783c20c122ef0276b0843d172ea641c80c"} Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.034232 4729 scope.go:117] "RemoveContainer" containerID="2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69" Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.056606 4729 scope.go:117] "RemoveContainer" containerID="2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69" Jan 27 14:15:34 crc kubenswrapper[4729]: E0127 14:15:34.058678 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69\": container with ID starting with 2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69 not 
found: ID does not exist" containerID="2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69" Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.058739 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69"} err="failed to get container status \"2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69\": rpc error: code = NotFound desc = could not find container \"2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69\": container with ID starting with 2aa0960f63bd3ff506dbf502d53fd68afe728fd0197b8f05fa9d646e67d51a69 not found: ID does not exist" Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.069172 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-94b79ccc9-z4zb9"] Jan 27 14:15:34 crc kubenswrapper[4729]: I0127 14:15:34.072164 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-94b79ccc9-z4zb9"] Jan 27 14:15:36 crc kubenswrapper[4729]: I0127 14:15:36.055634 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a9d143-82ff-449f-885d-37d7c2769bea" path="/var/lib/kubelet/pods/69a9d143-82ff-449f-885d-37d7c2769bea/volumes" Jan 27 14:16:52 crc kubenswrapper[4729]: I0127 14:16:52.655281 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:16:52 crc kubenswrapper[4729]: I0127 14:16:52.655677 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Jan 27 14:17:22 crc kubenswrapper[4729]: I0127 14:17:22.655480 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:17:22 crc kubenswrapper[4729]: I0127 14:17:22.656153 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:17:49 crc kubenswrapper[4729]: I0127 14:17:49.741428 4729 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.655771 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.656260 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.656328 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.657203 4729 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c10316ef0408393982d83c857b8476129168afee8da189de5ad004580a185d5b"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.657298 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://c10316ef0408393982d83c857b8476129168afee8da189de5ad004580a185d5b" gracePeriod=600 Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.860154 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="c10316ef0408393982d83c857b8476129168afee8da189de5ad004580a185d5b" exitCode=0 Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.860223 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"c10316ef0408393982d83c857b8476129168afee8da189de5ad004580a185d5b"} Jan 27 14:17:52 crc kubenswrapper[4729]: I0127 14:17:52.860543 4729 scope.go:117] "RemoveContainer" containerID="4dd419eedd32250b5a6b51c6a6227f2b147584e939fb9591ec1e180b53cbd253" Jan 27 14:17:53 crc kubenswrapper[4729]: I0127 14:17:53.870915 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"031b86fdf1c8588d1c8685212fd5e4f48ddec95b4eeebd667ab12b9388977fde"} Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.196141 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59"] Jan 27 14:19:00 crc kubenswrapper[4729]: E0127 14:19:00.196888 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797af56f-5ea4-435e-b09b-0b0901afb74e" containerName="collect-profiles" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.196902 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="797af56f-5ea4-435e-b09b-0b0901afb74e" containerName="collect-profiles" Jan 27 14:19:00 crc kubenswrapper[4729]: E0127 14:19:00.196923 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a9d143-82ff-449f-885d-37d7c2769bea" containerName="console" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.196929 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a9d143-82ff-449f-885d-37d7c2769bea" containerName="console" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.197042 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a9d143-82ff-449f-885d-37d7c2769bea" containerName="console" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.197068 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="797af56f-5ea4-435e-b09b-0b0901afb74e" containerName="collect-profiles" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.198078 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.200349 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.209033 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59"] Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.267046 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b96p2\" (UniqueName: \"kubernetes.io/projected/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-kube-api-access-b96p2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.267141 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.267335 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: 
I0127 14:19:00.368855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.369057 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b96p2\" (UniqueName: \"kubernetes.io/projected/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-kube-api-access-b96p2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.369113 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.369740 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.369721 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.398475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b96p2\" (UniqueName: \"kubernetes.io/projected/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-kube-api-access-b96p2\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.518629 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:00 crc kubenswrapper[4729]: I0127 14:19:00.744531 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59"] Jan 27 14:19:01 crc kubenswrapper[4729]: I0127 14:19:01.288677 4729 generic.go:334] "Generic (PLEG): container finished" podID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerID="9fa104d3af3d16d0529704c61ac9bf8fd4d87e0be65358ed5ce3073432bf7161" exitCode=0 Jan 27 14:19:01 crc kubenswrapper[4729]: I0127 14:19:01.288728 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" event={"ID":"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c","Type":"ContainerDied","Data":"9fa104d3af3d16d0529704c61ac9bf8fd4d87e0be65358ed5ce3073432bf7161"} Jan 27 14:19:01 crc kubenswrapper[4729]: I0127 14:19:01.289027 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" event={"ID":"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c","Type":"ContainerStarted","Data":"bcfac3d28214557140d1aaaa887492ef60011a2539f4338470bf61b2e7ad049f"} Jan 27 14:19:01 crc kubenswrapper[4729]: I0127 14:19:01.290606 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.554559 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9hkhz"] Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.556503 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.572699 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hkhz"] Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.603197 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-catalog-content\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.603324 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrvm\" (UniqueName: \"kubernetes.io/projected/9dfb6450-cf84-4fb9-a95d-00fef07961d4-kube-api-access-mcrvm\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.603375 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-utilities\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.704932 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-utilities\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.705044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-catalog-content\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.705127 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrvm\" (UniqueName: \"kubernetes.io/projected/9dfb6450-cf84-4fb9-a95d-00fef07961d4-kube-api-access-mcrvm\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.705670 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-utilities\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.705869 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-catalog-content\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.732679 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrvm\" (UniqueName: \"kubernetes.io/projected/9dfb6450-cf84-4fb9-a95d-00fef07961d4-kube-api-access-mcrvm\") pod \"redhat-operators-9hkhz\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:02 crc kubenswrapper[4729]: I0127 14:19:02.877974 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:03 crc kubenswrapper[4729]: I0127 14:19:03.086744 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hkhz"] Jan 27 14:19:03 crc kubenswrapper[4729]: I0127 14:19:03.303904 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerStarted","Data":"0c8e45642850e74e18e50b0bb1db3ec8350e26ad467c4bb32cf58765a9377a18"} Jan 27 14:19:03 crc kubenswrapper[4729]: I0127 14:19:03.303954 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerStarted","Data":"2bbc7ddf93cd688d418ac7f7ef18b1af8ddf6711fcee6c693f5ed23913976886"} Jan 27 14:19:04 crc kubenswrapper[4729]: I0127 14:19:04.311556 4729 generic.go:334] "Generic (PLEG): container finished" podID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerID="0c8e45642850e74e18e50b0bb1db3ec8350e26ad467c4bb32cf58765a9377a18" exitCode=0 Jan 27 14:19:04 crc kubenswrapper[4729]: I0127 14:19:04.311602 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerDied","Data":"0c8e45642850e74e18e50b0bb1db3ec8350e26ad467c4bb32cf58765a9377a18"} Jan 27 14:19:06 crc kubenswrapper[4729]: I0127 14:19:06.323219 4729 generic.go:334] "Generic (PLEG): container finished" podID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerID="b06c7c67685caca2a1729c8046317abb0b26448092a32013068fdf1cc4c03140" exitCode=0 Jan 27 14:19:06 crc kubenswrapper[4729]: I0127 14:19:06.323311 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerDied","Data":"b06c7c67685caca2a1729c8046317abb0b26448092a32013068fdf1cc4c03140"} Jan 27 14:19:07 crc kubenswrapper[4729]: I0127 14:19:07.333159 4729 generic.go:334] "Generic (PLEG): container finished" podID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerID="56e186807e31870040704c51c1bf49eb9a6163742bd4190493a79ef270897b28" exitCode=0 Jan 27 14:19:07 crc kubenswrapper[4729]: I0127 14:19:07.333213 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" event={"ID":"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c","Type":"ContainerDied","Data":"56e186807e31870040704c51c1bf49eb9a6163742bd4190493a79ef270897b28"} Jan 27 14:19:08 crc kubenswrapper[4729]: I0127 14:19:08.343523 4729 generic.go:334] "Generic (PLEG): container finished" podID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerID="6be4ac6c6cf7ed0942697a79c952052ee8dd871f8dbd1b97af56f7900af7cdde" exitCode=0 Jan 27 14:19:08 crc kubenswrapper[4729]: I0127 14:19:08.343639 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" 
event={"ID":"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c","Type":"ContainerDied","Data":"6be4ac6c6cf7ed0942697a79c952052ee8dd871f8dbd1b97af56f7900af7cdde"} Jan 27 14:19:08 crc kubenswrapper[4729]: I0127 14:19:08.346848 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerStarted","Data":"fde6be597d03838258bf417fc341b245400b300b30af1742c223fcf1411f61a2"} Jan 27 14:19:08 crc kubenswrapper[4729]: I0127 14:19:08.388934 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9hkhz" podStartSLOduration=3.279012396 podStartE2EDuration="6.388910875s" podCreationTimestamp="2026-01-27 14:19:02 +0000 UTC" firstStartedPulling="2026-01-27 14:19:04.314682159 +0000 UTC m=+830.898873163" lastFinishedPulling="2026-01-27 14:19:07.424580638 +0000 UTC m=+834.008771642" observedRunningTime="2026-01-27 14:19:08.384540774 +0000 UTC m=+834.968731778" watchObservedRunningTime="2026-01-27 14:19:08.388910875 +0000 UTC m=+834.973101909" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.589053 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.713356 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-bundle\") pod \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.713448 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b96p2\" (UniqueName: \"kubernetes.io/projected/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-kube-api-access-b96p2\") pod \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.713511 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-util\") pod \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\" (UID: \"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c\") " Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.715598 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-bundle" (OuterVolumeSpecName: "bundle") pod "cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" (UID: "cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.719978 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-kube-api-access-b96p2" (OuterVolumeSpecName: "kube-api-access-b96p2") pod "cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" (UID: "cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c"). InnerVolumeSpecName "kube-api-access-b96p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.724075 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-util" (OuterVolumeSpecName: "util") pod "cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" (UID: "cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.815670 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.815746 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b96p2\" (UniqueName: \"kubernetes.io/projected/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-kube-api-access-b96p2\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:09 crc kubenswrapper[4729]: I0127 14:19:09.815769 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:10 crc kubenswrapper[4729]: I0127 14:19:10.361318 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" event={"ID":"cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c","Type":"ContainerDied","Data":"bcfac3d28214557140d1aaaa887492ef60011a2539f4338470bf61b2e7ad049f"} Jan 27 14:19:10 crc kubenswrapper[4729]: I0127 14:19:10.361375 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcfac3d28214557140d1aaaa887492ef60011a2539f4338470bf61b2e7ad049f" Jan 27 14:19:10 crc kubenswrapper[4729]: I0127 14:19:10.361426 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59" Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.115484 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9l5t6"] Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116068 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-controller" containerID="cri-o://4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116100 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="nbdb" containerID="cri-o://bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116395 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="sbdb" containerID="cri-o://9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116449 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="northd" containerID="cri-o://4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116501 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-node" 
containerID="cri-o://974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116509 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-acl-logging" containerID="cri-o://c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.116558 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41" gracePeriod=30 Jan 27 14:19:11 crc kubenswrapper[4729]: I0127 14:19:11.159750 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" containerID="cri-o://30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8" gracePeriod=30 Jan 27 14:19:12 crc kubenswrapper[4729]: I0127 14:19:12.879669 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:12 crc kubenswrapper[4729]: I0127 14:19:12.879748 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:13 crc kubenswrapper[4729]: I0127 14:19:13.925468 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9hkhz" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="registry-server" probeResult="failure" output=< Jan 27 14:19:13 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:19:13 crc 
kubenswrapper[4729]: > Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.111238 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 is running failed: container process not found" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.111297 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf is running failed: container process not found" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.112008 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 is running failed: container process not found" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.112844 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf is running failed: container process not found" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.112908 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 is running failed: container process not found" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.112950 4729 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="nbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.113407 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf is running failed: container process not found" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.113455 4729 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="sbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.899218 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/4.log" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.899783 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/3.log" Jan 27 14:19:16 crc kubenswrapper[4729]: 
I0127 14:19:16.901602 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovn-acl-logging/0.log" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.901970 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovn-controller/0.log" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.902426 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.965615 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7t59w"] Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.965958 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.965974 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.965987 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-node" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.965995 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-node" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966011 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="pull" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966019 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="pull" Jan 27 14:19:16 crc kubenswrapper[4729]: 
E0127 14:19:16.966033 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-acl-logging" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966042 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-acl-logging" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966056 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="sbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966066 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="sbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966077 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966084 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966095 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966103 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966112 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="util" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966119 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="util" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966132 4729 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="extract" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966149 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="extract" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966160 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966166 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966177 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966184 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966194 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="nbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966200 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="nbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966211 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966218 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966227 4729 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966235 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966253 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kubecfg-setup" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966260 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kubecfg-setup" Jan 27 14:19:16 crc kubenswrapper[4729]: E0127 14:19:16.966269 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="northd" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966276 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="northd" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966407 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966419 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="sbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966433 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966441 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966452 4729 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="nbdb" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966463 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966473 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c" containerName="extract" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966482 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966491 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovn-acl-logging" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966501 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-node" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966512 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="northd" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966524 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.966786 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerName="ovnkube-controller" Jan 27 14:19:16 crc kubenswrapper[4729]: I0127 14:19:16.969094 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016728 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-log-socket\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016782 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-var-lib-openvswitch\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016822 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-script-lib\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016854 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v24tk\" (UniqueName: \"kubernetes.io/projected/e351d0ac-c092-4226-84d2-dbcea45c1ec0-kube-api-access-v24tk\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016931 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016921 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016949 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-log-socket" (OuterVolumeSpecName: "log-socket") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.016953 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-ovn-kubernetes\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017021 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017036 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-bin\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017023 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017068 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-systemd\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017105 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017116 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-node-log\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017146 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-systemd-units\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017176 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-config\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017196 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-netns\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017213 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-slash\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017228 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017251 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-etc-openvswitch\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017246 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-node-log" (OuterVolumeSpecName: "node-log") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017266 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-slash" (OuterVolumeSpecName: "host-slash") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017324 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovn-node-metrics-cert\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017272 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017372 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017388 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-ovn\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017418 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-env-overrides\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017456 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-netd\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017477 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-openvswitch\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017441 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017519 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017521 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-kubelet\") pod \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\" (UID: \"e351d0ac-c092-4226-84d2-dbcea45c1ec0\") " Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017548 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017580 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017788 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-ovnkube-script-lib\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017815 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017826 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-slash\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017846 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-ovnkube-config\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017927 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-cni-netd\") pod 
\"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017930 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.017954 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-kubelet\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018018 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns788\" (UniqueName: \"kubernetes.io/projected/8db3b171-aba8-4f09-b9f3-5769af614e30-kube-api-access-ns788\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018038 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-ovn\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018082 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-var-lib-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018114 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-run-netns\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018134 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-node-log\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018154 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-log-socket\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018174 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-env-overrides\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018199 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-etc-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018369 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018433 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018468 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018476 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-systemd\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018554 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-run-ovn-kubernetes\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018595 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-cni-bin\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018632 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-systemd-units\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018666 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8db3b171-aba8-4f09-b9f3-5769af614e30-ovn-node-metrics-cert\") pod \"ovnkube-node-7t59w\" (UID: 
\"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018824 4729 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018840 4729 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018849 4729 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018859 4729 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018869 4729 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018898 4729 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018911 4729 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc 
kubenswrapper[4729]: I0127 14:19:17.018921 4729 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018934 4729 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018947 4729 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018960 4729 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018972 4729 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.018992 4729 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.019003 4729 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.019014 4729 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.019024 4729 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.019035 4729 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.028715 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e351d0ac-c092-4226-84d2-dbcea45c1ec0-kube-api-access-v24tk" (OuterVolumeSpecName: "kube-api-access-v24tk") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "kube-api-access-v24tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.028759 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.057405 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e351d0ac-c092-4226-84d2-dbcea45c1ec0" (UID: "e351d0ac-c092-4226-84d2-dbcea45c1ec0"). 
InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121214 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-ovnkube-script-lib\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121300 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-slash\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121328 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-ovnkube-config\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121363 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-cni-netd\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121395 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-kubelet\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121436 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-slash\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121512 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-kubelet\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121540 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns788\" (UniqueName: \"kubernetes.io/projected/8db3b171-aba8-4f09-b9f3-5769af614e30-kube-api-access-ns788\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121571 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-ovn\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121597 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-run-netns\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 
14:19:17.121621 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-var-lib-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121644 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-node-log\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121658 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-log-socket\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121675 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-env-overrides\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121703 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-etc-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121766 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121807 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121850 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-systemd\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121889 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-run-ovn-kubernetes\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121917 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-cni-bin\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121949 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-systemd-units\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121977 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8db3b171-aba8-4f09-b9f3-5769af614e30-ovn-node-metrics-cert\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122102 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e351d0ac-c092-4226-84d2-dbcea45c1ec0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122121 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v24tk\" (UniqueName: \"kubernetes.io/projected/e351d0ac-c092-4226-84d2-dbcea45c1ec0-kube-api-access-v24tk\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122132 4729 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e351d0ac-c092-4226-84d2-dbcea45c1ec0-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122494 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-env-overrides\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-ovn\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.121573 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-cni-netd\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122578 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-systemd\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122626 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-var-lib-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122600 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-run-netns\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122653 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-etc-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: 
\"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122695 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-ovnkube-script-lib\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122724 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-run-openvswitch\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122703 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-log-socket\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122698 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-node-log\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122740 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122779 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-systemd-units\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122785 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-cni-bin\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122785 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8db3b171-aba8-4f09-b9f3-5769af614e30-host-run-ovn-kubernetes\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.122987 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8db3b171-aba8-4f09-b9f3-5769af614e30-ovnkube-config\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.126498 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8db3b171-aba8-4f09-b9f3-5769af614e30-ovn-node-metrics-cert\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 
14:19:17.144491 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns788\" (UniqueName: \"kubernetes.io/projected/8db3b171-aba8-4f09-b9f3-5769af614e30-kube-api-access-ns788\") pod \"ovnkube-node-7t59w\" (UID: \"8db3b171-aba8-4f09-b9f3-5769af614e30\") " pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.229830 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/4.log" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.230552 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovnkube-controller/3.log" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.233132 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovn-acl-logging/0.log" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.233562 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9l5t6_e351d0ac-c092-4226-84d2-dbcea45c1ec0/ovn-controller/0.log" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.233917 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerID="30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8" exitCode=2 Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.233940 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" containerID="c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925" exitCode=143 Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.233950 4729 generic.go:334] "Generic (PLEG): container finished" podID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" 
containerID="4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2" exitCode=143 Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.233974 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8"} Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.234013 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925"} Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.234024 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2"} Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.234055 4729 scope.go:117] "RemoveContainer" containerID="30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.259817 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.288299 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.296130 4729 scope.go:117] "RemoveContainer" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.345077 4729 scope.go:117] "RemoveContainer" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.366779 4729 scope.go:117] "RemoveContainer" containerID="4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.404562 4729 scope.go:117] "RemoveContainer" containerID="50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.462870 4729 scope.go:117] "RemoveContainer" containerID="974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.495585 4729 scope.go:117] "RemoveContainer" containerID="c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.531528 4729 scope.go:117] "RemoveContainer" containerID="4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.570849 4729 scope.go:117] "RemoveContainer" containerID="de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.585716 4729 scope.go:117] "RemoveContainer" containerID="30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.586211 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8\": container with ID starting with 
30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8 not found: ID does not exist" containerID="30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.586269 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8"} err="failed to get container status \"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8\": rpc error: code = NotFound desc = could not find container \"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8\": container with ID starting with 30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.586294 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.586726 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\": container with ID starting with 1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b not found: ID does not exist" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.586752 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b"} err="failed to get container status \"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\": rpc error: code = NotFound desc = could not find container \"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\": container with ID starting with 1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b not found: ID does not 
exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.586771 4729 scope.go:117] "RemoveContainer" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.587222 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\": container with ID starting with 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf not found: ID does not exist" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.587252 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf"} err="failed to get container status \"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\": rpc error: code = NotFound desc = could not find container \"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\": container with ID starting with 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.587274 4729 scope.go:117] "RemoveContainer" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.587570 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\": container with ID starting with bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 not found: ID does not exist" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.587595 4729 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84"} err="failed to get container status \"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\": rpc error: code = NotFound desc = could not find container \"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\": container with ID starting with bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.587616 4729 scope.go:117] "RemoveContainer" containerID="4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.587950 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\": container with ID starting with 4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31 not found: ID does not exist" containerID="4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.587986 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31"} err="failed to get container status \"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\": rpc error: code = NotFound desc = could not find container \"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\": container with ID starting with 4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.588004 4729 scope.go:117] "RemoveContainer" containerID="50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.588524 4729 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\": container with ID starting with 50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41 not found: ID does not exist" containerID="50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.588551 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41"} err="failed to get container status \"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\": rpc error: code = NotFound desc = could not find container \"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\": container with ID starting with 50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.588567 4729 scope.go:117] "RemoveContainer" containerID="974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.588934 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\": container with ID starting with 974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8 not found: ID does not exist" containerID="974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.588957 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8"} err="failed to get container status \"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\": rpc error: code = NotFound desc = could 
not find container \"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\": container with ID starting with 974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.588973 4729 scope.go:117] "RemoveContainer" containerID="c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.589308 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\": container with ID starting with c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925 not found: ID does not exist" containerID="c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.589333 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925"} err="failed to get container status \"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\": rpc error: code = NotFound desc = could not find container \"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\": container with ID starting with c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.589352 4729 scope.go:117] "RemoveContainer" containerID="4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.589752 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\": container with ID starting with 4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2 not found: 
ID does not exist" containerID="4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.589812 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2"} err="failed to get container status \"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\": rpc error: code = NotFound desc = could not find container \"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\": container with ID starting with 4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.589849 4729 scope.go:117] "RemoveContainer" containerID="de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc" Jan 27 14:19:17 crc kubenswrapper[4729]: E0127 14:19:17.590262 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\": container with ID starting with de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc not found: ID does not exist" containerID="de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.590283 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc"} err="failed to get container status \"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\": rpc error: code = NotFound desc = could not find container \"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\": container with ID starting with de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.590298 4729 
scope.go:117] "RemoveContainer" containerID="30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.590588 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8"} err="failed to get container status \"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8\": rpc error: code = NotFound desc = could not find container \"30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8\": container with ID starting with 30aeea00cb3b36e93b02d21830887e1b6a35556fd5677095ba1ad9374b6c79d8 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.590616 4729 scope.go:117] "RemoveContainer" containerID="1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.590905 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b"} err="failed to get container status \"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\": rpc error: code = NotFound desc = could not find container \"1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b\": container with ID starting with 1ddd81761437f62846de29cc7de9329df74cf260fa5fd02b745ab0f8e5d6626b not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.590932 4729 scope.go:117] "RemoveContainer" containerID="9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591227 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf"} err="failed to get container status \"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\": rpc 
error: code = NotFound desc = could not find container \"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf\": container with ID starting with 9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591245 4729 scope.go:117] "RemoveContainer" containerID="bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591441 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84"} err="failed to get container status \"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\": rpc error: code = NotFound desc = could not find container \"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84\": container with ID starting with bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591464 4729 scope.go:117] "RemoveContainer" containerID="4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591701 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31"} err="failed to get container status \"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\": rpc error: code = NotFound desc = could not find container \"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31\": container with ID starting with 4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591731 4729 scope.go:117] "RemoveContainer" containerID="50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41" Jan 27 14:19:17 crc 
kubenswrapper[4729]: I0127 14:19:17.591962 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41"} err="failed to get container status \"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\": rpc error: code = NotFound desc = could not find container \"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41\": container with ID starting with 50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.591992 4729 scope.go:117] "RemoveContainer" containerID="974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.592253 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8"} err="failed to get container status \"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\": rpc error: code = NotFound desc = could not find container \"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8\": container with ID starting with 974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.592279 4729 scope.go:117] "RemoveContainer" containerID="c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.592511 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925"} err="failed to get container status \"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\": rpc error: code = NotFound desc = could not find container \"c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925\": container 
with ID starting with c75161ed5bf5b1dc8b82244fd18d0777ceaa864a067967c295fb288ec881e925 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.592531 4729 scope.go:117] "RemoveContainer" containerID="4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.592788 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2"} err="failed to get container status \"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\": rpc error: code = NotFound desc = could not find container \"4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2\": container with ID starting with 4b7fcd7771350745fe28aeed7be57a495fd465ef8c88168cff1b78f299daf6a2 not found: ID does not exist" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.592813 4729 scope.go:117] "RemoveContainer" containerID="de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc" Jan 27 14:19:17 crc kubenswrapper[4729]: I0127 14:19:17.593222 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc"} err="failed to get container status \"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\": rpc error: code = NotFound desc = could not find container \"de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc\": container with ID starting with de77d7cde9504d3b3213887112fdbffde8248289b96a55a38e5ce75cc72820cc not found: ID does not exist" Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.240521 4729 generic.go:334] "Generic (PLEG): container finished" podID="8db3b171-aba8-4f09-b9f3-5769af614e30" containerID="0d8462f1af676b9779418d92507870c0e812c8f811f986e74e2cdd1b8ee41174" exitCode=0 Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.240621 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerDied","Data":"0d8462f1af676b9779418d92507870c0e812c8f811f986e74e2cdd1b8ee41174"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.240965 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"07cb44a7a5277b9256f7360e2c181317ab93205d001d17e3833be34e478e62d9"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243213 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243154 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"9aa45e1f01dbe20e33f0dac4ee37651bdf813f1bdb2549e54390a1fed5f70bcf"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243380 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"bffae45dd06ad64677695e61be76497b46503b415c25f898472e6eec97801f84"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243439 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"4e2d0afcc7f4e6d08bb185b48359d435f5497e6950415b7d6f5e876e486b0a31"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243516 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" 
event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"50a7b19ccdbffcb0f41ababc496e648042fe1c088027f0635e8385fd3a074b41"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243570 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"974990748c99670de7fc30d29bc4f41896a0292c3582ae33527eb3e3e5922bd8"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.243762 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9l5t6" event={"ID":"e351d0ac-c092-4226-84d2-dbcea45c1ec0","Type":"ContainerDied","Data":"c18dc5ba54bfb24826d2d958d5d371d7f283f927858b308e511e56377891c73d"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.244760 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/2.log" Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.245187 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/1.log" Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.245224 4729 generic.go:334] "Generic (PLEG): container finished" podID="c96a4b30-dced-4bf8-8f46-348c1b8972b3" containerID="bdd8f0c91b4e8a3fe01d49899ee5fc45f9d4d8ff5debdfe48bad7e730d5ca470" exitCode=2 Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.245252 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerDied","Data":"bdd8f0c91b4e8a3fe01d49899ee5fc45f9d4d8ff5debdfe48bad7e730d5ca470"} Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.245279 4729 scope.go:117] "RemoveContainer" containerID="3a1e8f15f36fecdd5578377856a582814fb86558fea5c4b6bf231d1c861314be" Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 
14:19:18.245896 4729 scope.go:117] "RemoveContainer" containerID="bdd8f0c91b4e8a3fe01d49899ee5fc45f9d4d8ff5debdfe48bad7e730d5ca470" Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.444018 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9l5t6"] Jan 27 14:19:18 crc kubenswrapper[4729]: I0127 14:19:18.452662 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9l5t6"] Jan 27 14:19:19 crc kubenswrapper[4729]: I0127 14:19:19.254071 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ld6q8_c96a4b30-dced-4bf8-8f46-348c1b8972b3/kube-multus/2.log" Jan 27 14:19:19 crc kubenswrapper[4729]: I0127 14:19:19.254596 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ld6q8" event={"ID":"c96a4b30-dced-4bf8-8f46-348c1b8972b3","Type":"ContainerStarted","Data":"9996b5b7f15df707fe3f3ba1df2ba20f03a23fbb30632f920c724d621fb22c86"} Jan 27 14:19:19 crc kubenswrapper[4729]: I0127 14:19:19.257631 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"a3d200d34ae11e75e9aba7edbc3a07c39e5cee8f590d1a88dd88555cc6d9498d"} Jan 27 14:19:19 crc kubenswrapper[4729]: I0127 14:19:19.257663 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"854f535eb0e997597d8ce7d05fdb8bdd8cc496ee0290d611ac17b01aa954e455"} Jan 27 14:19:19 crc kubenswrapper[4729]: I0127 14:19:19.257675 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"66e32af3b3c9ed6ffa41cfa23c65b3c37fc9a0aec9281ff1e6ceeb2c7a9357f4"} Jan 27 14:19:20 crc kubenswrapper[4729]: I0127 
14:19:20.081815 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e351d0ac-c092-4226-84d2-dbcea45c1ec0" path="/var/lib/kubelet/pods/e351d0ac-c092-4226-84d2-dbcea45c1ec0/volumes" Jan 27 14:19:20 crc kubenswrapper[4729]: I0127 14:19:20.266933 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"1bd0749cde6f4d1cd74ac0b937c6a96cb8adb74486e014f62f36511a2fa42b7e"} Jan 27 14:19:20 crc kubenswrapper[4729]: I0127 14:19:20.266988 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"241a86b17ae72720e1d21b60565d604993a64390bbae277dd4db3fe906bac568"} Jan 27 14:19:20 crc kubenswrapper[4729]: I0127 14:19:20.267004 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"e150b8749c811811268a12d5f0f3df9e9d224808ea994e2febfc2ca768ac8765"} Jan 27 14:19:22 crc kubenswrapper[4729]: I0127 14:19:22.919473 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:22 crc kubenswrapper[4729]: I0127 14:19:22.960570 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.150257 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hkhz"] Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.288213 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" 
event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"70d033dabe186e4a9e996af794de66fa78349c499bc975cd3853549adcfc5dae"} Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.863052 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b"] Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.864477 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.869479 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.869760 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-nhjvd" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.870248 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.919833 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6"] Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.922871 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.924415 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv"] Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.925969 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.926362 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.926385 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rw5c5" Jan 27 14:19:23 crc kubenswrapper[4729]: I0127 14:19:23.934132 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2h59\" (UniqueName: \"kubernetes.io/projected/e5a4281d-dad0-47ba-b48c-cb8a18c57552-kube-api-access-p2h59\") pod \"obo-prometheus-operator-68bc856cb9-6jn5b\" (UID: \"e5a4281d-dad0-47ba-b48c-cb8a18c57552\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.035926 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64ad3df0-d3a7-446f-a7d9-6c4194d92071-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv\" (UID: \"64ad3df0-d3a7-446f-a7d9-6c4194d92071\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.036007 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/009d21ee-b5c2-4d71-8a58-fc2643442532-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6\" (UID: \"009d21ee-b5c2-4d71-8a58-fc2643442532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 
14:19:24.036059 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2h59\" (UniqueName: \"kubernetes.io/projected/e5a4281d-dad0-47ba-b48c-cb8a18c57552-kube-api-access-p2h59\") pod \"obo-prometheus-operator-68bc856cb9-6jn5b\" (UID: \"e5a4281d-dad0-47ba-b48c-cb8a18c57552\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.036135 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64ad3df0-d3a7-446f-a7d9-6c4194d92071-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv\" (UID: \"64ad3df0-d3a7-446f-a7d9-6c4194d92071\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.036177 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/009d21ee-b5c2-4d71-8a58-fc2643442532-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6\" (UID: \"009d21ee-b5c2-4d71-8a58-fc2643442532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.058076 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2h59\" (UniqueName: \"kubernetes.io/projected/e5a4281d-dad0-47ba-b48c-cb8a18c57552-kube-api-access-p2h59\") pod \"obo-prometheus-operator-68bc856cb9-6jn5b\" (UID: \"e5a4281d-dad0-47ba-b48c-cb8a18c57552\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.078070 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gcmgr"] Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 
14:19:24.078960 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.086359 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-lhhvw" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.086482 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.137830 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64ad3df0-d3a7-446f-a7d9-6c4194d92071-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv\" (UID: \"64ad3df0-d3a7-446f-a7d9-6c4194d92071\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.137928 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qttb\" (UniqueName: \"kubernetes.io/projected/5b2e021c-d93d-45b1-81be-040aa9ab8ada-kube-api-access-6qttb\") pod \"observability-operator-59bdc8b94-gcmgr\" (UID: \"5b2e021c-d93d-45b1-81be-040aa9ab8ada\") " pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.137975 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/009d21ee-b5c2-4d71-8a58-fc2643442532-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6\" (UID: \"009d21ee-b5c2-4d71-8a58-fc2643442532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.138075 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64ad3df0-d3a7-446f-a7d9-6c4194d92071-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv\" (UID: \"64ad3df0-d3a7-446f-a7d9-6c4194d92071\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.138673 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b2e021c-d93d-45b1-81be-040aa9ab8ada-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gcmgr\" (UID: \"5b2e021c-d93d-45b1-81be-040aa9ab8ada\") " pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.138717 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/009d21ee-b5c2-4d71-8a58-fc2643442532-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6\" (UID: \"009d21ee-b5c2-4d71-8a58-fc2643442532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.141407 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64ad3df0-d3a7-446f-a7d9-6c4194d92071-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv\" (UID: \"64ad3df0-d3a7-446f-a7d9-6c4194d92071\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.146138 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/009d21ee-b5c2-4d71-8a58-fc2643442532-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6\" (UID: \"009d21ee-b5c2-4d71-8a58-fc2643442532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.146138 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/009d21ee-b5c2-4d71-8a58-fc2643442532-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6\" (UID: \"009d21ee-b5c2-4d71-8a58-fc2643442532\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.146452 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64ad3df0-d3a7-446f-a7d9-6c4194d92071-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv\" (UID: \"64ad3df0-d3a7-446f-a7d9-6c4194d92071\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.173079 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p5mb2"] Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.173886 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.177283 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-8t662" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.194119 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.226320 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(b5dd50fe935bb2423db29ca5030740d71922c71eae7e76ed8d19558676cc52b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.226467 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(b5dd50fe935bb2423db29ca5030740d71922c71eae7e76ed8d19558676cc52b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.226544 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(b5dd50fe935bb2423db29ca5030740d71922c71eae7e76ed8d19558676cc52b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.226646 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators(e5a4281d-dad0-47ba-b48c-cb8a18c57552)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators(e5a4281d-dad0-47ba-b48c-cb8a18c57552)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(b5dd50fe935bb2423db29ca5030740d71922c71eae7e76ed8d19558676cc52b5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" podUID="e5a4281d-dad0-47ba-b48c-cb8a18c57552" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.240342 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b2e021c-d93d-45b1-81be-040aa9ab8ada-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gcmgr\" (UID: \"5b2e021c-d93d-45b1-81be-040aa9ab8ada\") " pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.240418 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2xs\" (UniqueName: \"kubernetes.io/projected/a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31-kube-api-access-5r2xs\") pod \"perses-operator-5bf474d74f-p5mb2\" (UID: \"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31\") " pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.240466 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qttb\" 
(UniqueName: \"kubernetes.io/projected/5b2e021c-d93d-45b1-81be-040aa9ab8ada-kube-api-access-6qttb\") pod \"observability-operator-59bdc8b94-gcmgr\" (UID: \"5b2e021c-d93d-45b1-81be-040aa9ab8ada\") " pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.240491 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p5mb2\" (UID: \"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31\") " pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.244478 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/5b2e021c-d93d-45b1-81be-040aa9ab8ada-observability-operator-tls\") pod \"observability-operator-59bdc8b94-gcmgr\" (UID: \"5b2e021c-d93d-45b1-81be-040aa9ab8ada\") " pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.247208 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.255427 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.258951 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qttb\" (UniqueName: \"kubernetes.io/projected/5b2e021c-d93d-45b1-81be-040aa9ab8ada-kube-api-access-6qttb\") pod \"observability-operator-59bdc8b94-gcmgr\" (UID: \"5b2e021c-d93d-45b1-81be-040aa9ab8ada\") " pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.271897 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(d1a9683cb5b01690b8fa95028148adc4c50a5b80cf9b8a299d90faebdb4cda45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.271982 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(d1a9683cb5b01690b8fa95028148adc4c50a5b80cf9b8a299d90faebdb4cda45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.272011 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(d1a9683cb5b01690b8fa95028148adc4c50a5b80cf9b8a299d90faebdb4cda45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.272075 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators(009d21ee-b5c2-4d71-8a58-fc2643442532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators(009d21ee-b5c2-4d71-8a58-fc2643442532)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(d1a9683cb5b01690b8fa95028148adc4c50a5b80cf9b8a299d90faebdb4cda45): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" podUID="009d21ee-b5c2-4d71-8a58-fc2643442532" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.286150 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(ea4eb0e5232d78e0f9baa3660e0ab29037e81adcbba52f59d741d032931f74d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.286222 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(ea4eb0e5232d78e0f9baa3660e0ab29037e81adcbba52f59d741d032931f74d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.286243 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(ea4eb0e5232d78e0f9baa3660e0ab29037e81adcbba52f59d741d032931f74d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.286298 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators(64ad3df0-d3a7-446f-a7d9-6c4194d92071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators(64ad3df0-d3a7-446f-a7d9-6c4194d92071)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(ea4eb0e5232d78e0f9baa3660e0ab29037e81adcbba52f59d741d032931f74d5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" podUID="64ad3df0-d3a7-446f-a7d9-6c4194d92071" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.306146 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9hkhz" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="registry-server" containerID="cri-o://fde6be597d03838258bf417fc341b245400b300b30af1742c223fcf1411f61a2" gracePeriod=2 Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.341552 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2xs\" (UniqueName: \"kubernetes.io/projected/a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31-kube-api-access-5r2xs\") pod \"perses-operator-5bf474d74f-p5mb2\" (UID: \"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31\") " pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.341623 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p5mb2\" (UID: \"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31\") " pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.342466 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31-openshift-service-ca\") pod \"perses-operator-5bf474d74f-p5mb2\" (UID: \"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31\") " pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.357768 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2xs\" (UniqueName: \"kubernetes.io/projected/a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31-kube-api-access-5r2xs\") pod \"perses-operator-5bf474d74f-p5mb2\" (UID: \"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31\") " pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.396949 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.420011 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(43e7c9fa0d241cfc69134a7c3ee8a21a7ac8396dbd3d3987f12f396e73826be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.420087 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(43e7c9fa0d241cfc69134a7c3ee8a21a7ac8396dbd3d3987f12f396e73826be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.420112 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(43e7c9fa0d241cfc69134a7c3ee8a21a7ac8396dbd3d3987f12f396e73826be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.420163 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-gcmgr_openshift-operators(5b2e021c-d93d-45b1-81be-040aa9ab8ada)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-gcmgr_openshift-operators(5b2e021c-d93d-45b1-81be-040aa9ab8ada)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(43e7c9fa0d241cfc69134a7c3ee8a21a7ac8396dbd3d3987f12f396e73826be1): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" podUID="5b2e021c-d93d-45b1-81be-040aa9ab8ada" Jan 27 14:19:24 crc kubenswrapper[4729]: I0127 14:19:24.498156 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.519507 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(e64f6e26067dac7d9ea3459e64c800902f518d7fe06d34e5ae792e4b30c01845): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.519583 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(e64f6e26067dac7d9ea3459e64c800902f518d7fe06d34e5ae792e4b30c01845): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.519602 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(e64f6e26067dac7d9ea3459e64c800902f518d7fe06d34e5ae792e4b30c01845): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:24 crc kubenswrapper[4729]: E0127 14:19:24.519683 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-p5mb2_openshift-operators(a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-p5mb2_openshift-operators(a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(e64f6e26067dac7d9ea3459e64c800902f518d7fe06d34e5ae792e4b30c01845): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" podUID="a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.338955 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" event={"ID":"8db3b171-aba8-4f09-b9f3-5769af614e30","Type":"ContainerStarted","Data":"862199f7c46ebdd017b5500651dabd52991be1eb496ccae63350aaedf137668c"} Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.339528 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.347140 4729 generic.go:334] "Generic (PLEG): container finished" podID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerID="fde6be597d03838258bf417fc341b245400b300b30af1742c223fcf1411f61a2" exitCode=0 Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.347184 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerDied","Data":"fde6be597d03838258bf417fc341b245400b300b30af1742c223fcf1411f61a2"} Jan 27 
14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.394289 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" podStartSLOduration=12.39426372 podStartE2EDuration="12.39426372s" podCreationTimestamp="2026-01-27 14:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:19:28.3931976 +0000 UTC m=+854.977388604" watchObservedRunningTime="2026-01-27 14:19:28.39426372 +0000 UTC m=+854.978454724" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.412729 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.424602 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.502646 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-utilities\") pod \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.502922 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-catalog-content\") pod \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\" (UID: \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.502947 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrvm\" (UniqueName: \"kubernetes.io/projected/9dfb6450-cf84-4fb9-a95d-00fef07961d4-kube-api-access-mcrvm\") pod \"9dfb6450-cf84-4fb9-a95d-00fef07961d4\" (UID: 
\"9dfb6450-cf84-4fb9-a95d-00fef07961d4\") " Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.504659 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-utilities" (OuterVolumeSpecName: "utilities") pod "9dfb6450-cf84-4fb9-a95d-00fef07961d4" (UID: "9dfb6450-cf84-4fb9-a95d-00fef07961d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.529363 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfb6450-cf84-4fb9-a95d-00fef07961d4-kube-api-access-mcrvm" (OuterVolumeSpecName: "kube-api-access-mcrvm") pod "9dfb6450-cf84-4fb9-a95d-00fef07961d4" (UID: "9dfb6450-cf84-4fb9-a95d-00fef07961d4"). InnerVolumeSpecName "kube-api-access-mcrvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.604522 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrvm\" (UniqueName: \"kubernetes.io/projected/9dfb6450-cf84-4fb9-a95d-00fef07961d4-kube-api-access-mcrvm\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.604560 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.630345 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9dfb6450-cf84-4fb9-a95d-00fef07961d4" (UID: "9dfb6450-cf84-4fb9-a95d-00fef07961d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:19:28 crc kubenswrapper[4729]: I0127 14:19:28.706362 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dfb6450-cf84-4fb9-a95d-00fef07961d4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.273971 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gcmgr"] Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.274105 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.274557 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.283126 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6"] Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.283790 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.284416 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.302213 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b"] Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.302348 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.302940 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.312394 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p5mb2"] Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.312577 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.313072 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.345704 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv"] Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.345841 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.346372 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.350852 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(8a8b85b84411ab9ab6e949be468b039d0cb64b69ae410c2c1c6dbcef8915c623): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.350978 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(8a8b85b84411ab9ab6e949be468b039d0cb64b69ae410c2c1c6dbcef8915c623): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.351014 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(8a8b85b84411ab9ab6e949be468b039d0cb64b69ae410c2c1c6dbcef8915c623): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.351078 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-gcmgr_openshift-operators(5b2e021c-d93d-45b1-81be-040aa9ab8ada)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-gcmgr_openshift-operators(5b2e021c-d93d-45b1-81be-040aa9ab8ada)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-gcmgr_openshift-operators_5b2e021c-d93d-45b1-81be-040aa9ab8ada_0(8a8b85b84411ab9ab6e949be468b039d0cb64b69ae410c2c1c6dbcef8915c623): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" podUID="5b2e021c-d93d-45b1-81be-040aa9ab8ada" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.370449 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hkhz" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.370781 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hkhz" event={"ID":"9dfb6450-cf84-4fb9-a95d-00fef07961d4","Type":"ContainerDied","Data":"2bbc7ddf93cd688d418ac7f7ef18b1af8ddf6711fcee6c693f5ed23913976886"} Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.371006 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.371147 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.371284 4729 scope.go:117] "RemoveContainer" containerID="fde6be597d03838258bf417fc341b245400b300b30af1742c223fcf1411f61a2" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.392487 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(29d4bd9239a197b31ca6ead9e7e0c2d6289ea4563e070ecf1eb275c242edaf4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.392568 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(29d4bd9239a197b31ca6ead9e7e0c2d6289ea4563e070ecf1eb275c242edaf4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.392592 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(29d4bd9239a197b31ca6ead9e7e0c2d6289ea4563e070ecf1eb275c242edaf4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.392647 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators(009d21ee-b5c2-4d71-8a58-fc2643442532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators(009d21ee-b5c2-4d71-8a58-fc2643442532)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_openshift-operators_009d21ee-b5c2-4d71-8a58-fc2643442532_0(29d4bd9239a197b31ca6ead9e7e0c2d6289ea4563e070ecf1eb275c242edaf4a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" podUID="009d21ee-b5c2-4d71-8a58-fc2643442532" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.400714 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(9cb042f9a31d43780d4433204490de75f7b87ccce7bf263cb31bbe8b1e3b0b99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.400811 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(9cb042f9a31d43780d4433204490de75f7b87ccce7bf263cb31bbe8b1e3b0b99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.400859 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(9cb042f9a31d43780d4433204490de75f7b87ccce7bf263cb31bbe8b1e3b0b99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.400943 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-p5mb2_openshift-operators(a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-p5mb2_openshift-operators(a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-p5mb2_openshift-operators_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31_0(9cb042f9a31d43780d4433204490de75f7b87ccce7bf263cb31bbe8b1e3b0b99): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" podUID="a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433229 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(92cc1ec2207d58ed567d7ac7d0414f091102079efe75438d600e1acd8d2e1093): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433312 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(92cc1ec2207d58ed567d7ac7d0414f091102079efe75438d600e1acd8d2e1093): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433337 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(92cc1ec2207d58ed567d7ac7d0414f091102079efe75438d600e1acd8d2e1093): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433390 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators(e5a4281d-dad0-47ba-b48c-cb8a18c57552)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators(e5a4281d-dad0-47ba-b48c-cb8a18c57552)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-6jn5b_openshift-operators_e5a4281d-dad0-47ba-b48c-cb8a18c57552_0(92cc1ec2207d58ed567d7ac7d0414f091102079efe75438d600e1acd8d2e1093): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" podUID="e5a4281d-dad0-47ba-b48c-cb8a18c57552" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433652 4729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(c417921cabf4de2cdf65ae3cce74d93c0bfa896b636c460efb274ae120b2afbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433677 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(c417921cabf4de2cdf65ae3cce74d93c0bfa896b636c460efb274ae120b2afbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433698 4729 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(c417921cabf4de2cdf65ae3cce74d93c0bfa896b636c460efb274ae120b2afbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:29 crc kubenswrapper[4729]: E0127 14:19:29.433729 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators(64ad3df0-d3a7-446f-a7d9-6c4194d92071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators(64ad3df0-d3a7-446f-a7d9-6c4194d92071)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_openshift-operators_64ad3df0-d3a7-446f-a7d9-6c4194d92071_0(c417921cabf4de2cdf65ae3cce74d93c0bfa896b636c460efb274ae120b2afbc): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" podUID="64ad3df0-d3a7-446f-a7d9-6c4194d92071" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.437174 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.470030 4729 scope.go:117] "RemoveContainer" containerID="b06c7c67685caca2a1729c8046317abb0b26448092a32013068fdf1cc4c03140" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.494760 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hkhz"] Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.500250 4729 scope.go:117] "RemoveContainer" containerID="0c8e45642850e74e18e50b0bb1db3ec8350e26ad467c4bb32cf58765a9377a18" Jan 27 14:19:29 crc kubenswrapper[4729]: I0127 14:19:29.502461 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9hkhz"] Jan 27 14:19:30 crc kubenswrapper[4729]: I0127 14:19:30.067605 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" path="/var/lib/kubelet/pods/9dfb6450-cf84-4fb9-a95d-00fef07961d4/volumes" Jan 27 14:19:40 crc kubenswrapper[4729]: I0127 14:19:40.050431 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:40 crc kubenswrapper[4729]: I0127 14:19:40.051603 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" Jan 27 14:19:40 crc kubenswrapper[4729]: I0127 14:19:40.312678 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv"] Jan 27 14:19:40 crc kubenswrapper[4729]: W0127 14:19:40.322632 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64ad3df0_d3a7_446f_a7d9_6c4194d92071.slice/crio-a94a9f4d8c1ca3164f96eaa1c40ce80e763d0ba500da580d8f20d6c43b85b45f WatchSource:0}: Error finding container a94a9f4d8c1ca3164f96eaa1c40ce80e763d0ba500da580d8f20d6c43b85b45f: Status 404 returned error can't find the container with id a94a9f4d8c1ca3164f96eaa1c40ce80e763d0ba500da580d8f20d6c43b85b45f Jan 27 14:19:40 crc kubenswrapper[4729]: I0127 14:19:40.435895 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" event={"ID":"64ad3df0-d3a7-446f-a7d9-6c4194d92071","Type":"ContainerStarted","Data":"a94a9f4d8c1ca3164f96eaa1c40ce80e763d0ba500da580d8f20d6c43b85b45f"} Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.050823 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.050860 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.050916 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.050917 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.051744 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.051946 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.051988 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.052067 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.414315 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-p5mb2"] Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.458741 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" event={"ID":"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31","Type":"ContainerStarted","Data":"21737b6793dfd1893aa19094ec54fd54d571361c4aa0e55eebe9a30c11a6e403"} Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.476147 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-gcmgr"] Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.538287 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b"] Jan 27 14:19:43 crc kubenswrapper[4729]: I0127 14:19:43.675827 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6"] Jan 27 14:19:46 crc kubenswrapper[4729]: W0127 14:19:46.366003 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b2e021c_d93d_45b1_81be_040aa9ab8ada.slice/crio-241b7592c8d68c0231a2b30f00d32b52a33b28e843cc978d8f2de995c0ee6a41 WatchSource:0}: Error finding container 241b7592c8d68c0231a2b30f00d32b52a33b28e843cc978d8f2de995c0ee6a41: Status 404 returned error can't find the container with id 241b7592c8d68c0231a2b30f00d32b52a33b28e843cc978d8f2de995c0ee6a41 Jan 27 14:19:46 crc kubenswrapper[4729]: W0127 14:19:46.374080 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5a4281d_dad0_47ba_b48c_cb8a18c57552.slice/crio-39377867f3cb2ad450967a7c404e24a4cf585bf13c527f08e0ab882bc4834b2e WatchSource:0}: Error finding container 39377867f3cb2ad450967a7c404e24a4cf585bf13c527f08e0ab882bc4834b2e: Status 404 returned error can't find the container with id 39377867f3cb2ad450967a7c404e24a4cf585bf13c527f08e0ab882bc4834b2e Jan 27 14:19:46 crc kubenswrapper[4729]: I0127 14:19:46.482933 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" event={"ID":"5b2e021c-d93d-45b1-81be-040aa9ab8ada","Type":"ContainerStarted","Data":"241b7592c8d68c0231a2b30f00d32b52a33b28e843cc978d8f2de995c0ee6a41"} Jan 27 14:19:46 crc kubenswrapper[4729]: I0127 14:19:46.484139 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" event={"ID":"e5a4281d-dad0-47ba-b48c-cb8a18c57552","Type":"ContainerStarted","Data":"39377867f3cb2ad450967a7c404e24a4cf585bf13c527f08e0ab882bc4834b2e"} Jan 27 14:19:46 crc kubenswrapper[4729]: I0127 14:19:46.486088 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" event={"ID":"009d21ee-b5c2-4d71-8a58-fc2643442532","Type":"ContainerStarted","Data":"8025bca18699d2b0f5184d08b7881e829e593a50d19d451f86b7a19ba33edb8b"} Jan 27 14:19:47 crc kubenswrapper[4729]: I0127 14:19:47.316584 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7t59w" Jan 27 14:19:47 crc kubenswrapper[4729]: I0127 14:19:47.516761 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" event={"ID":"64ad3df0-d3a7-446f-a7d9-6c4194d92071","Type":"ContainerStarted","Data":"bb11624d71f44cf28839f29c8d1f6fb944c2b3fa42a3f4a8e5730b09ad52aae2"} Jan 27 14:19:47 crc kubenswrapper[4729]: I0127 14:19:47.540335 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv" podStartSLOduration=18.016105802 podStartE2EDuration="24.540316798s" podCreationTimestamp="2026-01-27 14:19:23 +0000 UTC" firstStartedPulling="2026-01-27 14:19:40.326343479 +0000 UTC m=+866.910534483" lastFinishedPulling="2026-01-27 14:19:46.850554475 +0000 UTC m=+873.434745479" observedRunningTime="2026-01-27 14:19:47.536610835 +0000 UTC m=+874.120801859" watchObservedRunningTime="2026-01-27 14:19:47.540316798 +0000 UTC m=+874.124507802" Jan 27 14:19:48 crc kubenswrapper[4729]: I0127 14:19:48.525015 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" event={"ID":"009d21ee-b5c2-4d71-8a58-fc2643442532","Type":"ContainerStarted","Data":"4e4b704e8a81fd1d5c6436aa34dfcc2d404318cc68aee2c5d090fe9836928ed8"} Jan 27 14:19:48 crc kubenswrapper[4729]: I0127 14:19:48.548920 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-77f88549bc-k96w6" podStartSLOduration=24.33478193 podStartE2EDuration="25.548866486s" podCreationTimestamp="2026-01-27 14:19:23 +0000 UTC" firstStartedPulling="2026-01-27 14:19:46.368478269 +0000 UTC m=+872.952669273" lastFinishedPulling="2026-01-27 14:19:47.582562825 +0000 UTC m=+874.166753829" observedRunningTime="2026-01-27 14:19:48.544979198 +0000 UTC m=+875.129170222" watchObservedRunningTime="2026-01-27 14:19:48.548866486 +0000 UTC m=+875.133057510" Jan 27 14:19:50 crc kubenswrapper[4729]: I0127 14:19:50.539294 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" event={"ID":"a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31","Type":"ContainerStarted","Data":"1b7dba8eea055b9f4c05eff041e8b24283324a7fdd9af80b7a4d988f322d6ee2"} Jan 27 14:19:50 crc kubenswrapper[4729]: I0127 14:19:50.539672 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:19:50 crc kubenswrapper[4729]: I0127 14:19:50.561130 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" podStartSLOduration=19.971039855 podStartE2EDuration="26.561107876s" podCreationTimestamp="2026-01-27 14:19:24 +0000 UTC" firstStartedPulling="2026-01-27 14:19:43.452915585 +0000 UTC m=+870.037106589" lastFinishedPulling="2026-01-27 14:19:50.042983606 +0000 UTC m=+876.627174610" observedRunningTime="2026-01-27 14:19:50.555942422 +0000 UTC m=+877.140133446" watchObservedRunningTime="2026-01-27 14:19:50.561107876 +0000 UTC m=+877.145298880" Jan 27 14:19:51 crc kubenswrapper[4729]: I0127 14:19:51.548055 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" 
event={"ID":"e5a4281d-dad0-47ba-b48c-cb8a18c57552","Type":"ContainerStarted","Data":"97d8a688022ff151b94dce4f00b0af97ba4ec0c407cdef24f929d22604f0025d"} Jan 27 14:19:52 crc kubenswrapper[4729]: I0127 14:19:52.580071 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6jn5b" podStartSLOduration=25.470823862 podStartE2EDuration="29.580048253s" podCreationTimestamp="2026-01-27 14:19:23 +0000 UTC" firstStartedPulling="2026-01-27 14:19:46.382036818 +0000 UTC m=+872.966227822" lastFinishedPulling="2026-01-27 14:19:50.491261209 +0000 UTC m=+877.075452213" observedRunningTime="2026-01-27 14:19:52.569729735 +0000 UTC m=+879.153920739" watchObservedRunningTime="2026-01-27 14:19:52.580048253 +0000 UTC m=+879.164239257" Jan 27 14:19:52 crc kubenswrapper[4729]: I0127 14:19:52.655590 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:19:52 crc kubenswrapper[4729]: I0127 14:19:52.655697 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:19:57 crc kubenswrapper[4729]: I0127 14:19:57.601810 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" event={"ID":"5b2e021c-d93d-45b1-81be-040aa9ab8ada","Type":"ContainerStarted","Data":"37ed00a095588bc8cde0c7844b42e1f09675b03c4b874cd0e08e88ea75a0944d"} Jan 27 14:19:58 crc kubenswrapper[4729]: I0127 14:19:58.607557 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:19:58 crc kubenswrapper[4729]: I0127 14:19:58.627295 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" podStartSLOduration=23.650391926 podStartE2EDuration="34.627268876s" podCreationTimestamp="2026-01-27 14:19:24 +0000 UTC" firstStartedPulling="2026-01-27 14:19:46.368472769 +0000 UTC m=+872.952663773" lastFinishedPulling="2026-01-27 14:19:57.345349729 +0000 UTC m=+883.929540723" observedRunningTime="2026-01-27 14:19:58.622456711 +0000 UTC m=+885.206647715" watchObservedRunningTime="2026-01-27 14:19:58.627268876 +0000 UTC m=+885.211459880" Jan 27 14:19:58 crc kubenswrapper[4729]: I0127 14:19:58.630666 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-gcmgr" Jan 27 14:20:04 crc kubenswrapper[4729]: I0127 14:20:04.500608 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.673935 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8pvns"] Jan 27 14:20:09 crc kubenswrapper[4729]: E0127 14:20:09.674526 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="registry-server" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.674544 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="registry-server" Jan 27 14:20:09 crc kubenswrapper[4729]: E0127 14:20:09.674556 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="extract-utilities" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.674563 4729 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="extract-utilities" Jan 27 14:20:09 crc kubenswrapper[4729]: E0127 14:20:09.674582 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="extract-content" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.674587 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="extract-content" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.674686 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfb6450-cf84-4fb9-a95d-00fef07961d4" containerName="registry-server" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.675226 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.679718 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.679734 4729 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6b2rz" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.679831 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d"] Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.679835 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.680606 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.685992 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-68w6c"] Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.687023 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-68w6c" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.713380 4729 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fxjcg" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.713460 4729 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-r2t5t" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.747689 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8pvns"] Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.752028 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjcv\" (UniqueName: \"kubernetes.io/projected/e0c7f80f-6d2c-4806-a4ea-192d40937ea3-kube-api-access-vqjcv\") pod \"cert-manager-webhook-687f57d79b-8pvns\" (UID: \"e0c7f80f-6d2c-4806-a4ea-192d40937ea3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.752118 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgv8r\" (UniqueName: \"kubernetes.io/projected/fa223173-c466-46fc-a84d-25e55838018e-kube-api-access-fgv8r\") pod \"cert-manager-858654f9db-68w6c\" (UID: \"fa223173-c466-46fc-a84d-25e55838018e\") " pod="cert-manager/cert-manager-858654f9db-68w6c" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.752177 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zjr72\" (UniqueName: \"kubernetes.io/projected/ccab51b9-7558-4837-b6f3-f7727538fbd5-kube-api-access-zjr72\") pod \"cert-manager-cainjector-cf98fcc89-kgx4d\" (UID: \"ccab51b9-7558-4837-b6f3-f7727538fbd5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.754933 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-68w6c"] Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.757659 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d"] Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.853410 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjcv\" (UniqueName: \"kubernetes.io/projected/e0c7f80f-6d2c-4806-a4ea-192d40937ea3-kube-api-access-vqjcv\") pod \"cert-manager-webhook-687f57d79b-8pvns\" (UID: \"e0c7f80f-6d2c-4806-a4ea-192d40937ea3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.853661 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgv8r\" (UniqueName: \"kubernetes.io/projected/fa223173-c466-46fc-a84d-25e55838018e-kube-api-access-fgv8r\") pod \"cert-manager-858654f9db-68w6c\" (UID: \"fa223173-c466-46fc-a84d-25e55838018e\") " pod="cert-manager/cert-manager-858654f9db-68w6c" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.853837 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjr72\" (UniqueName: \"kubernetes.io/projected/ccab51b9-7558-4837-b6f3-f7727538fbd5-kube-api-access-zjr72\") pod \"cert-manager-cainjector-cf98fcc89-kgx4d\" (UID: \"ccab51b9-7558-4837-b6f3-f7727538fbd5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.874801 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vqjcv\" (UniqueName: \"kubernetes.io/projected/e0c7f80f-6d2c-4806-a4ea-192d40937ea3-kube-api-access-vqjcv\") pod \"cert-manager-webhook-687f57d79b-8pvns\" (UID: \"e0c7f80f-6d2c-4806-a4ea-192d40937ea3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.875577 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgv8r\" (UniqueName: \"kubernetes.io/projected/fa223173-c466-46fc-a84d-25e55838018e-kube-api-access-fgv8r\") pod \"cert-manager-858654f9db-68w6c\" (UID: \"fa223173-c466-46fc-a84d-25e55838018e\") " pod="cert-manager/cert-manager-858654f9db-68w6c" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.881260 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjr72\" (UniqueName: \"kubernetes.io/projected/ccab51b9-7558-4837-b6f3-f7727538fbd5-kube-api-access-zjr72\") pod \"cert-manager-cainjector-cf98fcc89-kgx4d\" (UID: \"ccab51b9-7558-4837-b6f3-f7727538fbd5\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" Jan 27 14:20:09 crc kubenswrapper[4729]: I0127 14:20:09.996198 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.025224 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.033540 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-68w6c" Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.456248 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8pvns"] Jan 27 14:20:10 crc kubenswrapper[4729]: W0127 14:20:10.468205 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0c7f80f_6d2c_4806_a4ea_192d40937ea3.slice/crio-0a7880c83fd5b087c12369e0c8b25ffd5e67da118e71e938a02f566788091b12 WatchSource:0}: Error finding container 0a7880c83fd5b087c12369e0c8b25ffd5e67da118e71e938a02f566788091b12: Status 404 returned error can't find the container with id 0a7880c83fd5b087c12369e0c8b25ffd5e67da118e71e938a02f566788091b12 Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.533967 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-68w6c"] Jan 27 14:20:10 crc kubenswrapper[4729]: W0127 14:20:10.538357 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa223173_c466_46fc_a84d_25e55838018e.slice/crio-94d79b5a38997ee519c030de2175ea366b873557070cb6d3861a843f97606a5e WatchSource:0}: Error finding container 94d79b5a38997ee519c030de2175ea366b873557070cb6d3861a843f97606a5e: Status 404 returned error can't find the container with id 94d79b5a38997ee519c030de2175ea366b873557070cb6d3861a843f97606a5e Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.547842 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d"] Jan 27 14:20:10 crc kubenswrapper[4729]: W0127 14:20:10.555748 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccab51b9_7558_4837_b6f3_f7727538fbd5.slice/crio-6d7e7c51b7cae6225d3daeca3a530dcfb8236ebf44c9851e6119be99778f15b8 
WatchSource:0}: Error finding container 6d7e7c51b7cae6225d3daeca3a530dcfb8236ebf44c9851e6119be99778f15b8: Status 404 returned error can't find the container with id 6d7e7c51b7cae6225d3daeca3a530dcfb8236ebf44c9851e6119be99778f15b8 Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.696201 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" event={"ID":"ccab51b9-7558-4837-b6f3-f7727538fbd5","Type":"ContainerStarted","Data":"6d7e7c51b7cae6225d3daeca3a530dcfb8236ebf44c9851e6119be99778f15b8"} Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.698135 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-68w6c" event={"ID":"fa223173-c466-46fc-a84d-25e55838018e","Type":"ContainerStarted","Data":"94d79b5a38997ee519c030de2175ea366b873557070cb6d3861a843f97606a5e"} Jan 27 14:20:10 crc kubenswrapper[4729]: I0127 14:20:10.699118 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" event={"ID":"e0c7f80f-6d2c-4806-a4ea-192d40937ea3","Type":"ContainerStarted","Data":"0a7880c83fd5b087c12369e0c8b25ffd5e67da118e71e938a02f566788091b12"} Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.723105 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.734778 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.740061 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.883650 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6wb\" (UniqueName: \"kubernetes.io/projected/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-kube-api-access-bt6wb\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.883989 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-catalog-content\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.884022 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-utilities\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.985073 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt6wb\" (UniqueName: \"kubernetes.io/projected/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-kube-api-access-bt6wb\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.985393 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-catalog-content\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.985429 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-utilities\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.985988 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-utilities\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:11 crc kubenswrapper[4729]: I0127 14:20:11.986158 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-catalog-content\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:12 crc kubenswrapper[4729]: I0127 14:20:12.012992 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt6wb\" (UniqueName: \"kubernetes.io/projected/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-kube-api-access-bt6wb\") pod \"certified-operators-mwd2l\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:12 crc kubenswrapper[4729]: I0127 14:20:12.056865 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:12 crc kubenswrapper[4729]: I0127 14:20:12.352927 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:20:12 crc kubenswrapper[4729]: W0127 14:20:12.379168 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5ceaa91_d5cc_4ac9_8351_ad4ef924678e.slice/crio-501537f3b0cb364765d9e36b99a7eb020cd5792d6af47c975ecc9cc0a3cd1841 WatchSource:0}: Error finding container 501537f3b0cb364765d9e36b99a7eb020cd5792d6af47c975ecc9cc0a3cd1841: Status 404 returned error can't find the container with id 501537f3b0cb364765d9e36b99a7eb020cd5792d6af47c975ecc9cc0a3cd1841 Jan 27 14:20:12 crc kubenswrapper[4729]: I0127 14:20:12.714617 4729 generic.go:334] "Generic (PLEG): container finished" podID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerID="dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422" exitCode=0 Jan 27 14:20:12 crc kubenswrapper[4729]: I0127 14:20:12.714694 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwd2l" event={"ID":"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e","Type":"ContainerDied","Data":"dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422"} Jan 27 14:20:12 crc kubenswrapper[4729]: I0127 14:20:12.714756 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwd2l" event={"ID":"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e","Type":"ContainerStarted","Data":"501537f3b0cb364765d9e36b99a7eb020cd5792d6af47c975ecc9cc0a3cd1841"} Jan 27 14:20:22 crc kubenswrapper[4729]: I0127 14:20:22.655596 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:20:22 crc kubenswrapper[4729]: I0127 14:20:22.656497 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:20:22 crc kubenswrapper[4729]: I0127 14:20:22.972106 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wwns9"] Jan 27 14:20:22 crc kubenswrapper[4729]: I0127 14:20:22.973399 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.002927 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwns9"] Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.065100 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-catalog-content\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.065165 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-utilities\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.065290 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xjtv\" 
(UniqueName: \"kubernetes.io/projected/22143c22-2594-4695-b203-e3530c0ce2a9-kube-api-access-2xjtv\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.166293 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xjtv\" (UniqueName: \"kubernetes.io/projected/22143c22-2594-4695-b203-e3530c0ce2a9-kube-api-access-2xjtv\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.166698 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-catalog-content\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.167269 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-catalog-content\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.167312 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-utilities\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.167452 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-utilities\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.188750 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xjtv\" (UniqueName: \"kubernetes.io/projected/22143c22-2594-4695-b203-e3530c0ce2a9-kube-api-access-2xjtv\") pod \"redhat-marketplace-wwns9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:23 crc kubenswrapper[4729]: I0127 14:20:23.301074 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:27 crc kubenswrapper[4729]: E0127 14:20:27.609533 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/jetstack/cert-manager-webhook:v1.19.2" Jan 27 14:20:27 crc kubenswrapper[4729]: E0127 14:20:27.610159 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:quay.io/jetstack/cert-manager-webhook:v1.19.2,Command:[],Args:[--v=2 --secure-port=10250 --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-dns-names=cert-manager-webhook --dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE) 
--dynamic-serving-dns-names=cert-manager-webhook.$(POD_NAMESPACE).svc],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqjcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-687f57d79b-8pvns_cert-manager(e0c7f80f-6d2c-4806-a4ea-192d40937ea3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:20:27 crc kubenswrapper[4729]: E0127 14:20:27.611456 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" podUID="e0c7f80f-6d2c-4806-a4ea-192d40937ea3" Jan 27 14:20:27 crc kubenswrapper[4729]: E0127 14:20:27.817849 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/jetstack/cert-manager-webhook:v1.19.2\\\"\"" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" podUID="e0c7f80f-6d2c-4806-a4ea-192d40937ea3" Jan 27 14:20:28 crc kubenswrapper[4729]: I0127 14:20:28.192798 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-wwns9"] Jan 27 14:20:28 crc kubenswrapper[4729]: I0127 14:20:28.819305 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerStarted","Data":"9138301722c084f926909d52a7f54e77a6d512769aed7d951d3ae25144e376f7"} Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.863409 4729 generic.go:334] "Generic (PLEG): container finished" podID="22143c22-2594-4695-b203-e3530c0ce2a9" containerID="5a4354f5e515807cbdebfe51d992179ee6b9870905126f4bcdbbb0971a8f399d" exitCode=0 Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.863494 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerDied","Data":"5a4354f5e515807cbdebfe51d992179ee6b9870905126f4bcdbbb0971a8f399d"} Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.867858 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" event={"ID":"ccab51b9-7558-4837-b6f3-f7727538fbd5","Type":"ContainerStarted","Data":"6ae0029c399e8d3952784eb7484a2750379504538fa7ef2d648c167f9c5276af"} Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.873723 4729 generic.go:334] "Generic (PLEG): container finished" podID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerID="21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7" exitCode=0 Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.873858 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwd2l" event={"ID":"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e","Type":"ContainerDied","Data":"21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7"} Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.879648 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-68w6c" event={"ID":"fa223173-c466-46fc-a84d-25e55838018e","Type":"ContainerStarted","Data":"11a0dbc08e271ca07a80d2e4d432bdc2e8eeca43ab1134f67aad074f4ea9993c"} Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.923603 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-68w6c" podStartSLOduration=2.6338491299999998 podStartE2EDuration="25.923578631s" podCreationTimestamp="2026-01-27 14:20:09 +0000 UTC" firstStartedPulling="2026-01-27 14:20:10.54554929 +0000 UTC m=+897.129740294" lastFinishedPulling="2026-01-27 14:20:33.835278781 +0000 UTC m=+920.419469795" observedRunningTime="2026-01-27 14:20:34.90955919 +0000 UTC m=+921.493750194" watchObservedRunningTime="2026-01-27 14:20:34.923578631 +0000 UTC m=+921.507769635" Jan 27 14:20:34 crc kubenswrapper[4729]: I0127 14:20:34.938778 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kgx4d" podStartSLOduration=2.63487661 podStartE2EDuration="25.938750694s" podCreationTimestamp="2026-01-27 14:20:09 +0000 UTC" firstStartedPulling="2026-01-27 14:20:10.557947426 +0000 UTC m=+897.142138430" lastFinishedPulling="2026-01-27 14:20:33.86182149 +0000 UTC m=+920.446012514" observedRunningTime="2026-01-27 14:20:34.928492957 +0000 UTC m=+921.512683971" watchObservedRunningTime="2026-01-27 14:20:34.938750694 +0000 UTC m=+921.522941728" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.500687 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7b525"] Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.502800 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.508939 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7b525"] Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.567798 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gkf\" (UniqueName: \"kubernetes.io/projected/551829fb-1ee3-4e3e-a512-6b27e103d8a1-kube-api-access-f7gkf\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.568084 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-catalog-content\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.568264 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-utilities\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.669227 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gkf\" (UniqueName: \"kubernetes.io/projected/551829fb-1ee3-4e3e-a512-6b27e103d8a1-kube-api-access-f7gkf\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.669570 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-catalog-content\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.669701 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-utilities\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.670189 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-catalog-content\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.670270 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-utilities\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.691231 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gkf\" (UniqueName: \"kubernetes.io/projected/551829fb-1ee3-4e3e-a512-6b27e103d8a1-kube-api-access-f7gkf\") pod \"community-operators-7b525\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:35 crc kubenswrapper[4729]: I0127 14:20:35.838398 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:36 crc kubenswrapper[4729]: I0127 14:20:36.319343 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7b525"] Jan 27 14:20:36 crc kubenswrapper[4729]: I0127 14:20:36.896861 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwd2l" event={"ID":"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e","Type":"ContainerStarted","Data":"6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c"} Jan 27 14:20:36 crc kubenswrapper[4729]: I0127 14:20:36.899105 4729 generic.go:334] "Generic (PLEG): container finished" podID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerID="260088a672f8ba7e7f8e1b346453ef2d982da6c1102f46aae5b68e7298971961" exitCode=0 Jan 27 14:20:36 crc kubenswrapper[4729]: I0127 14:20:36.899137 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b525" event={"ID":"551829fb-1ee3-4e3e-a512-6b27e103d8a1","Type":"ContainerDied","Data":"260088a672f8ba7e7f8e1b346453ef2d982da6c1102f46aae5b68e7298971961"} Jan 27 14:20:36 crc kubenswrapper[4729]: I0127 14:20:36.899359 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b525" event={"ID":"551829fb-1ee3-4e3e-a512-6b27e103d8a1","Type":"ContainerStarted","Data":"1c1276a901ff115ca5416aa3de1f14c351f2793ae90d121aa0880ea124ad0b8e"} Jan 27 14:20:36 crc kubenswrapper[4729]: I0127 14:20:36.916434 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwd2l" podStartSLOduration=2.2584233559999998 podStartE2EDuration="25.916413099s" podCreationTimestamp="2026-01-27 14:20:11 +0000 UTC" firstStartedPulling="2026-01-27 14:20:12.718587762 +0000 UTC m=+899.302778766" lastFinishedPulling="2026-01-27 14:20:36.376577505 +0000 UTC m=+922.960768509" observedRunningTime="2026-01-27 
14:20:36.915461764 +0000 UTC m=+923.499652788" watchObservedRunningTime="2026-01-27 14:20:36.916413099 +0000 UTC m=+923.500604103" Jan 27 14:20:37 crc kubenswrapper[4729]: I0127 14:20:37.909501 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerStarted","Data":"fb8fab0c0b07833b82d9765da0db6e27ec7962556249da028bbed54ecdf8dede"} Jan 27 14:20:38 crc kubenswrapper[4729]: I0127 14:20:38.917833 4729 generic.go:334] "Generic (PLEG): container finished" podID="22143c22-2594-4695-b203-e3530c0ce2a9" containerID="fb8fab0c0b07833b82d9765da0db6e27ec7962556249da028bbed54ecdf8dede" exitCode=0 Jan 27 14:20:38 crc kubenswrapper[4729]: I0127 14:20:38.917971 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerDied","Data":"fb8fab0c0b07833b82d9765da0db6e27ec7962556249da028bbed54ecdf8dede"} Jan 27 14:20:38 crc kubenswrapper[4729]: I0127 14:20:38.921430 4729 generic.go:334] "Generic (PLEG): container finished" podID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerID="cf8d1670f1373b8da2ad9cbf4cbe829da77424deee28d25d311f7a76dd622e18" exitCode=0 Jan 27 14:20:38 crc kubenswrapper[4729]: I0127 14:20:38.921599 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b525" event={"ID":"551829fb-1ee3-4e3e-a512-6b27e103d8a1","Type":"ContainerDied","Data":"cf8d1670f1373b8da2ad9cbf4cbe829da77424deee28d25d311f7a76dd622e18"} Jan 27 14:20:40 crc kubenswrapper[4729]: I0127 14:20:40.934900 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b525" event={"ID":"551829fb-1ee3-4e3e-a512-6b27e103d8a1","Type":"ContainerStarted","Data":"0c7d106e5c011553553f27d205ea7a4b6e0c276b7223f882779f359d74e25583"} Jan 27 14:20:40 crc kubenswrapper[4729]: I0127 14:20:40.961659 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7b525" podStartSLOduration=2.683073555 podStartE2EDuration="5.961633597s" podCreationTimestamp="2026-01-27 14:20:35 +0000 UTC" firstStartedPulling="2026-01-27 14:20:36.900972069 +0000 UTC m=+923.485163073" lastFinishedPulling="2026-01-27 14:20:40.179532111 +0000 UTC m=+926.763723115" observedRunningTime="2026-01-27 14:20:40.955192278 +0000 UTC m=+927.539383312" watchObservedRunningTime="2026-01-27 14:20:40.961633597 +0000 UTC m=+927.545824601" Jan 27 14:20:41 crc kubenswrapper[4729]: I0127 14:20:41.947179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerStarted","Data":"ef8b69dd8214eed832f39c0176991ab04452364cebd03146866c392ea0ffb49d"} Jan 27 14:20:41 crc kubenswrapper[4729]: I0127 14:20:41.974752 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wwns9" podStartSLOduration=13.711606102 podStartE2EDuration="19.974730582s" podCreationTimestamp="2026-01-27 14:20:22 +0000 UTC" firstStartedPulling="2026-01-27 14:20:34.866231932 +0000 UTC m=+921.450422936" lastFinishedPulling="2026-01-27 14:20:41.129356412 +0000 UTC m=+927.713547416" observedRunningTime="2026-01-27 14:20:41.969439834 +0000 UTC m=+928.553630858" watchObservedRunningTime="2026-01-27 14:20:41.974730582 +0000 UTC m=+928.558921586" Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.059520 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.059575 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.111560 4729 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.954592 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" event={"ID":"e0c7f80f-6d2c-4806-a4ea-192d40937ea3","Type":"ContainerStarted","Data":"27aa89aae6d362cfeb89a61f3d9a9b60eb2186fe8f4906c84c97380e259d294e"} Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.955248 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.981721 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" podStartSLOduration=2.287064175 podStartE2EDuration="33.981699035s" podCreationTimestamp="2026-01-27 14:20:09 +0000 UTC" firstStartedPulling="2026-01-27 14:20:10.471839716 +0000 UTC m=+897.056030720" lastFinishedPulling="2026-01-27 14:20:42.166474576 +0000 UTC m=+928.750665580" observedRunningTime="2026-01-27 14:20:42.976529891 +0000 UTC m=+929.560720895" watchObservedRunningTime="2026-01-27 14:20:42.981699035 +0000 UTC m=+929.565890039" Jan 27 14:20:42 crc kubenswrapper[4729]: I0127 14:20:42.999247 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:20:43 crc kubenswrapper[4729]: I0127 14:20:43.302232 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:43 crc kubenswrapper[4729]: I0127 14:20:43.302283 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:43 crc kubenswrapper[4729]: I0127 14:20:43.342138 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:44 crc kubenswrapper[4729]: I0127 14:20:44.934839 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:20:45 crc kubenswrapper[4729]: I0127 14:20:45.293994 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkj77"] Jan 27 14:20:45 crc kubenswrapper[4729]: I0127 14:20:45.294339 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mkj77" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="registry-server" containerID="cri-o://5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552" gracePeriod=2 Jan 27 14:20:45 crc kubenswrapper[4729]: I0127 14:20:45.838945 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:45 crc kubenswrapper[4729]: I0127 14:20:45.839933 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:45 crc kubenswrapper[4729]: I0127 14:20:45.900162 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:46 crc kubenswrapper[4729]: I0127 14:20:46.014666 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:46 crc kubenswrapper[4729]: I0127 14:20:46.987114 4729 generic.go:334] "Generic (PLEG): container finished" podID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerID="5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552" exitCode=0 Jan 27 14:20:46 crc kubenswrapper[4729]: I0127 14:20:46.987209 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" 
event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerDied","Data":"5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552"} Jan 27 14:20:47 crc kubenswrapper[4729]: E0127 14:20:47.637052 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552 is running failed: container process not found" containerID="5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 14:20:47 crc kubenswrapper[4729]: E0127 14:20:47.637417 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552 is running failed: container process not found" containerID="5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 14:20:47 crc kubenswrapper[4729]: E0127 14:20:47.637761 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552 is running failed: container process not found" containerID="5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552" cmd=["grpc_health_probe","-addr=:50051"] Jan 27 14:20:47 crc kubenswrapper[4729]: E0127 14:20:47.637803 4729 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mkj77" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="registry-server" Jan 27 14:20:47 crc 
kubenswrapper[4729]: I0127 14:20:47.664211 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.846934 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-utilities\") pod \"8791c2ee-d19b-4208-b783-7de3eab67cad\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.847319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-catalog-content\") pod \"8791c2ee-d19b-4208-b783-7de3eab67cad\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.847368 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zpqr\" (UniqueName: \"kubernetes.io/projected/8791c2ee-d19b-4208-b783-7de3eab67cad-kube-api-access-7zpqr\") pod \"8791c2ee-d19b-4208-b783-7de3eab67cad\" (UID: \"8791c2ee-d19b-4208-b783-7de3eab67cad\") " Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.847922 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-utilities" (OuterVolumeSpecName: "utilities") pod "8791c2ee-d19b-4208-b783-7de3eab67cad" (UID: "8791c2ee-d19b-4208-b783-7de3eab67cad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.871227 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8791c2ee-d19b-4208-b783-7de3eab67cad-kube-api-access-7zpqr" (OuterVolumeSpecName: "kube-api-access-7zpqr") pod "8791c2ee-d19b-4208-b783-7de3eab67cad" (UID: "8791c2ee-d19b-4208-b783-7de3eab67cad"). InnerVolumeSpecName "kube-api-access-7zpqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.918223 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8791c2ee-d19b-4208-b783-7de3eab67cad" (UID: "8791c2ee-d19b-4208-b783-7de3eab67cad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.949205 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.949247 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8791c2ee-d19b-4208-b783-7de3eab67cad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.949262 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zpqr\" (UniqueName: \"kubernetes.io/projected/8791c2ee-d19b-4208-b783-7de3eab67cad-kube-api-access-7zpqr\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.996233 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mkj77" Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.996192 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mkj77" event={"ID":"8791c2ee-d19b-4208-b783-7de3eab67cad","Type":"ContainerDied","Data":"d606d103cebf7bcbca16e7ccab5c3b3e5c880f0d69210e9e53455978dd0194ca"} Jan 27 14:20:47 crc kubenswrapper[4729]: I0127 14:20:47.996392 4729 scope.go:117] "RemoveContainer" containerID="5342e09d158e6bc1de74d4379bd70f35b534f330d40b549b6fae30a99c155552" Jan 27 14:20:48 crc kubenswrapper[4729]: I0127 14:20:48.019844 4729 scope.go:117] "RemoveContainer" containerID="959243de95f2a21a5785c873b33907e2a251f765dbe4de4cc6c4b92ad21e9bc4" Jan 27 14:20:48 crc kubenswrapper[4729]: I0127 14:20:48.035049 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mkj77"] Jan 27 14:20:48 crc kubenswrapper[4729]: I0127 14:20:48.059557 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mkj77"] Jan 27 14:20:48 crc kubenswrapper[4729]: I0127 14:20:48.061995 4729 scope.go:117] "RemoveContainer" containerID="ef9b148a107004cf9ab78f0217c439148797f5480e0ccb665d1baba4c974d00b" Jan 27 14:20:49 crc kubenswrapper[4729]: I0127 14:20:49.692525 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7b525"] Jan 27 14:20:49 crc kubenswrapper[4729]: I0127 14:20:49.693209 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7b525" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="registry-server" containerID="cri-o://0c7d106e5c011553553f27d205ea7a4b6e0c276b7223f882779f359d74e25583" gracePeriod=2 Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.011745 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-687f57d79b-8pvns" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.014497 4729 generic.go:334] "Generic (PLEG): container finished" podID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerID="0c7d106e5c011553553f27d205ea7a4b6e0c276b7223f882779f359d74e25583" exitCode=0 Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.014549 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b525" event={"ID":"551829fb-1ee3-4e3e-a512-6b27e103d8a1","Type":"ContainerDied","Data":"0c7d106e5c011553553f27d205ea7a4b6e0c276b7223f882779f359d74e25583"} Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.071847 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" path="/var/lib/kubelet/pods/8791c2ee-d19b-4208-b783-7de3eab67cad/volumes" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.661536 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.798227 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-utilities\") pod \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.798524 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-catalog-content\") pod \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.798602 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7gkf\" (UniqueName: \"kubernetes.io/projected/551829fb-1ee3-4e3e-a512-6b27e103d8a1-kube-api-access-f7gkf\") pod \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\" (UID: \"551829fb-1ee3-4e3e-a512-6b27e103d8a1\") " Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.799381 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-utilities" (OuterVolumeSpecName: "utilities") pod "551829fb-1ee3-4e3e-a512-6b27e103d8a1" (UID: "551829fb-1ee3-4e3e-a512-6b27e103d8a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.805058 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551829fb-1ee3-4e3e-a512-6b27e103d8a1-kube-api-access-f7gkf" (OuterVolumeSpecName: "kube-api-access-f7gkf") pod "551829fb-1ee3-4e3e-a512-6b27e103d8a1" (UID: "551829fb-1ee3-4e3e-a512-6b27e103d8a1"). InnerVolumeSpecName "kube-api-access-f7gkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.844520 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "551829fb-1ee3-4e3e-a512-6b27e103d8a1" (UID: "551829fb-1ee3-4e3e-a512-6b27e103d8a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.900524 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.900563 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/551829fb-1ee3-4e3e-a512-6b27e103d8a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:50 crc kubenswrapper[4729]: I0127 14:20:50.900574 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7gkf\" (UniqueName: \"kubernetes.io/projected/551829fb-1ee3-4e3e-a512-6b27e103d8a1-kube-api-access-f7gkf\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.023049 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b525" event={"ID":"551829fb-1ee3-4e3e-a512-6b27e103d8a1","Type":"ContainerDied","Data":"1c1276a901ff115ca5416aa3de1f14c351f2793ae90d121aa0880ea124ad0b8e"} Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.023104 4729 scope.go:117] "RemoveContainer" containerID="0c7d106e5c011553553f27d205ea7a4b6e0c276b7223f882779f359d74e25583" Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.023132 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b525" Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.049162 4729 scope.go:117] "RemoveContainer" containerID="cf8d1670f1373b8da2ad9cbf4cbe829da77424deee28d25d311f7a76dd622e18" Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.061725 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7b525"] Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.068985 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7b525"] Jan 27 14:20:51 crc kubenswrapper[4729]: I0127 14:20:51.078110 4729 scope.go:117] "RemoveContainer" containerID="260088a672f8ba7e7f8e1b346453ef2d982da6c1102f46aae5b68e7298971961" Jan 27 14:20:52 crc kubenswrapper[4729]: I0127 14:20:52.059531 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" path="/var/lib/kubelet/pods/551829fb-1ee3-4e3e-a512-6b27e103d8a1/volumes" Jan 27 14:20:52 crc kubenswrapper[4729]: I0127 14:20:52.655645 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:20:52 crc kubenswrapper[4729]: I0127 14:20:52.656017 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:20:52 crc kubenswrapper[4729]: I0127 14:20:52.656151 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 
14:20:52 crc kubenswrapper[4729]: I0127 14:20:52.656931 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"031b86fdf1c8588d1c8685212fd5e4f48ddec95b4eeebd667ab12b9388977fde"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:20:52 crc kubenswrapper[4729]: I0127 14:20:52.657096 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://031b86fdf1c8588d1c8685212fd5e4f48ddec95b4eeebd667ab12b9388977fde" gracePeriod=600 Jan 27 14:20:53 crc kubenswrapper[4729]: I0127 14:20:53.046418 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="031b86fdf1c8588d1c8685212fd5e4f48ddec95b4eeebd667ab12b9388977fde" exitCode=0 Jan 27 14:20:53 crc kubenswrapper[4729]: I0127 14:20:53.046507 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"031b86fdf1c8588d1c8685212fd5e4f48ddec95b4eeebd667ab12b9388977fde"} Jan 27 14:20:53 crc kubenswrapper[4729]: I0127 14:20:53.046805 4729 scope.go:117] "RemoveContainer" containerID="c10316ef0408393982d83c857b8476129168afee8da189de5ad004580a185d5b" Jan 27 14:20:53 crc kubenswrapper[4729]: I0127 14:20:53.346140 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:53 crc kubenswrapper[4729]: I0127 14:20:53.888043 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwns9"] Jan 27 14:20:54 crc kubenswrapper[4729]: I0127 
14:20:54.058007 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wwns9" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="registry-server" containerID="cri-o://ef8b69dd8214eed832f39c0176991ab04452364cebd03146866c392ea0ffb49d" gracePeriod=2 Jan 27 14:20:55 crc kubenswrapper[4729]: I0127 14:20:55.066047 4729 generic.go:334] "Generic (PLEG): container finished" podID="22143c22-2594-4695-b203-e3530c0ce2a9" containerID="ef8b69dd8214eed832f39c0176991ab04452364cebd03146866c392ea0ffb49d" exitCode=0 Jan 27 14:20:55 crc kubenswrapper[4729]: I0127 14:20:55.066122 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerDied","Data":"ef8b69dd8214eed832f39c0176991ab04452364cebd03146866c392ea0ffb49d"} Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.163861 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.281894 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xjtv\" (UniqueName: \"kubernetes.io/projected/22143c22-2594-4695-b203-e3530c0ce2a9-kube-api-access-2xjtv\") pod \"22143c22-2594-4695-b203-e3530c0ce2a9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.281994 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-catalog-content\") pod \"22143c22-2594-4695-b203-e3530c0ce2a9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.282053 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-utilities\") pod \"22143c22-2594-4695-b203-e3530c0ce2a9\" (UID: \"22143c22-2594-4695-b203-e3530c0ce2a9\") " Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.282791 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-utilities" (OuterVolumeSpecName: "utilities") pod "22143c22-2594-4695-b203-e3530c0ce2a9" (UID: "22143c22-2594-4695-b203-e3530c0ce2a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.287225 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22143c22-2594-4695-b203-e3530c0ce2a9-kube-api-access-2xjtv" (OuterVolumeSpecName: "kube-api-access-2xjtv") pod "22143c22-2594-4695-b203-e3530c0ce2a9" (UID: "22143c22-2594-4695-b203-e3530c0ce2a9"). InnerVolumeSpecName "kube-api-access-2xjtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.302377 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22143c22-2594-4695-b203-e3530c0ce2a9" (UID: "22143c22-2594-4695-b203-e3530c0ce2a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.383570 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xjtv\" (UniqueName: \"kubernetes.io/projected/22143c22-2594-4695-b203-e3530c0ce2a9-kube-api-access-2xjtv\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.383602 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:56 crc kubenswrapper[4729]: I0127 14:20:56.383611 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22143c22-2594-4695-b203-e3530c0ce2a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.085147 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"f3482ae2a5a3f30c725532e5e8607ac477190fd04992775cb1a4c3b191b4ca65"} Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.088260 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwns9" event={"ID":"22143c22-2594-4695-b203-e3530c0ce2a9","Type":"ContainerDied","Data":"9138301722c084f926909d52a7f54e77a6d512769aed7d951d3ae25144e376f7"} Jan 27 14:20:57 crc 
kubenswrapper[4729]: I0127 14:20:57.088303 4729 scope.go:117] "RemoveContainer" containerID="ef8b69dd8214eed832f39c0176991ab04452364cebd03146866c392ea0ffb49d" Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.088353 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwns9" Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.109150 4729 scope.go:117] "RemoveContainer" containerID="fb8fab0c0b07833b82d9765da0db6e27ec7962556249da028bbed54ecdf8dede" Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.132752 4729 scope.go:117] "RemoveContainer" containerID="5a4354f5e515807cbdebfe51d992179ee6b9870905126f4bcdbbb0971a8f399d" Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.149644 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwns9"] Jan 27 14:20:57 crc kubenswrapper[4729]: I0127 14:20:57.157419 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwns9"] Jan 27 14:20:58 crc kubenswrapper[4729]: I0127 14:20:58.058454 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" path="/var/lib/kubelet/pods/22143c22-2594-4695-b203-e3530c0ce2a9/volumes" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.748943 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm"] Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749640 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="extract-utilities" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749652 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="extract-utilities" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749666 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749673 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749687 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749695 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749704 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="extract-utilities" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749710 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="extract-utilities" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749718 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749723 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749734 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="extract-content" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749739 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="extract-content" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749748 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="extract-content" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749754 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="extract-content" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749765 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="extract-utilities" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749770 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="extract-utilities" Jan 27 14:21:13 crc kubenswrapper[4729]: E0127 14:21:13.749817 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="extract-content" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.749824 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="extract-content" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.750024 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="8791c2ee-d19b-4208-b783-7de3eab67cad" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.750040 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="22143c22-2594-4695-b203-e3530c0ce2a9" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.750053 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="551829fb-1ee3-4e3e-a512-6b27e103d8a1" containerName="registry-server" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.751095 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.753384 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.765048 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm"] Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.829690 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnlv4\" (UniqueName: \"kubernetes.io/projected/a6125a70-d6bd-465f-85a9-6a39034b628b-kube-api-access-jnlv4\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.829729 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.829753 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: 
I0127 14:21:13.930845 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.931283 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnlv4\" (UniqueName: \"kubernetes.io/projected/a6125a70-d6bd-465f-85a9-6a39034b628b-kube-api-access-jnlv4\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.931370 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.931598 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.931724 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:13 crc kubenswrapper[4729]: I0127 14:21:13.949453 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnlv4\" (UniqueName: \"kubernetes.io/projected/a6125a70-d6bd-465f-85a9-6a39034b628b-kube-api-access-jnlv4\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.076576 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.085567 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.124136 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6"] Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.126134 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.137818 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6"] Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.241327 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.241383 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.241443 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zh9\" (UniqueName: \"kubernetes.io/projected/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-kube-api-access-r6zh9\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.342939 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.343002 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.343052 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zh9\" (UniqueName: \"kubernetes.io/projected/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-kube-api-access-r6zh9\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.343758 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm"] Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.343889 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.344283 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.366677 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zh9\" (UniqueName: \"kubernetes.io/projected/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-kube-api-access-r6zh9\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.474006 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:14 crc kubenswrapper[4729]: I0127 14:21:14.693345 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6"] Jan 27 14:21:15 crc kubenswrapper[4729]: I0127 14:21:15.349362 4729 generic.go:334] "Generic (PLEG): container finished" podID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerID="1a2936dec52220de898c00c404aa1644ce5f13074aebaf3aaedfc88d7bea54ee" exitCode=0 Jan 27 14:21:15 crc kubenswrapper[4729]: I0127 14:21:15.349436 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" event={"ID":"60fb67db-16cd-4ee3-a6d5-68b8be36ace9","Type":"ContainerDied","Data":"1a2936dec52220de898c00c404aa1644ce5f13074aebaf3aaedfc88d7bea54ee"} Jan 27 14:21:15 crc kubenswrapper[4729]: I0127 14:21:15.349717 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" event={"ID":"60fb67db-16cd-4ee3-a6d5-68b8be36ace9","Type":"ContainerStarted","Data":"894bf5c52a7acbf3f048ed8f01cff15a615e518be0858c39c6385ced3eb580a9"} Jan 27 14:21:15 crc kubenswrapper[4729]: I0127 14:21:15.352640 4729 generic.go:334] "Generic (PLEG): container finished" podID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerID="068f185e1aa6bf1dafdcef993807e4f9ac3573fb0197a0a9747a5888a93e8771" exitCode=0 Jan 27 14:21:15 crc kubenswrapper[4729]: I0127 14:21:15.352686 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" event={"ID":"a6125a70-d6bd-465f-85a9-6a39034b628b","Type":"ContainerDied","Data":"068f185e1aa6bf1dafdcef993807e4f9ac3573fb0197a0a9747a5888a93e8771"} Jan 27 14:21:15 crc kubenswrapper[4729]: I0127 14:21:15.352716 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" event={"ID":"a6125a70-d6bd-465f-85a9-6a39034b628b","Type":"ContainerStarted","Data":"d99377cacfcd00cce4ce1037bb9ce7a6365d01afc0afda6435f8120de6f2b04b"} Jan 27 14:21:20 crc kubenswrapper[4729]: I0127 14:21:20.387320 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" event={"ID":"a6125a70-d6bd-465f-85a9-6a39034b628b","Type":"ContainerStarted","Data":"289b5f318c5a437d5267fd51e9360e28cd3a6cd3da93a398e810ae25085a809b"} Jan 27 14:21:20 crc kubenswrapper[4729]: I0127 14:21:20.389992 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" event={"ID":"60fb67db-16cd-4ee3-a6d5-68b8be36ace9","Type":"ContainerStarted","Data":"d047b79eba5601760fe186fcc597a84282999f3b2c71abe8b43c407c79a06418"} Jan 27 14:21:22 crc kubenswrapper[4729]: I0127 
14:21:22.411426 4729 generic.go:334] "Generic (PLEG): container finished" podID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerID="289b5f318c5a437d5267fd51e9360e28cd3a6cd3da93a398e810ae25085a809b" exitCode=0 Jan 27 14:21:22 crc kubenswrapper[4729]: I0127 14:21:22.411499 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" event={"ID":"a6125a70-d6bd-465f-85a9-6a39034b628b","Type":"ContainerDied","Data":"289b5f318c5a437d5267fd51e9360e28cd3a6cd3da93a398e810ae25085a809b"} Jan 27 14:21:22 crc kubenswrapper[4729]: I0127 14:21:22.414149 4729 generic.go:334] "Generic (PLEG): container finished" podID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerID="d047b79eba5601760fe186fcc597a84282999f3b2c71abe8b43c407c79a06418" exitCode=0 Jan 27 14:21:22 crc kubenswrapper[4729]: I0127 14:21:22.414195 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" event={"ID":"60fb67db-16cd-4ee3-a6d5-68b8be36ace9","Type":"ContainerDied","Data":"d047b79eba5601760fe186fcc597a84282999f3b2c71abe8b43c407c79a06418"} Jan 27 14:21:23 crc kubenswrapper[4729]: I0127 14:21:23.427825 4729 generic.go:334] "Generic (PLEG): container finished" podID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerID="eadc387b898c329f66a2573af7a41c9c7a6821fcece3375f36e68a60a37d8de5" exitCode=0 Jan 27 14:21:23 crc kubenswrapper[4729]: I0127 14:21:23.427902 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" event={"ID":"60fb67db-16cd-4ee3-a6d5-68b8be36ace9","Type":"ContainerDied","Data":"eadc387b898c329f66a2573af7a41c9c7a6821fcece3375f36e68a60a37d8de5"} Jan 27 14:21:23 crc kubenswrapper[4729]: I0127 14:21:23.432368 4729 generic.go:334] "Generic (PLEG): container finished" podID="a6125a70-d6bd-465f-85a9-6a39034b628b" 
containerID="7e5101905c642316cb67134db914693443cb71214608c94407555708aca048d1" exitCode=0 Jan 27 14:21:23 crc kubenswrapper[4729]: I0127 14:21:23.432422 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" event={"ID":"a6125a70-d6bd-465f-85a9-6a39034b628b","Type":"ContainerDied","Data":"7e5101905c642316cb67134db914693443cb71214608c94407555708aca048d1"} Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.745852 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.751339 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.891028 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-bundle\") pod \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.891098 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-util\") pod \"a6125a70-d6bd-465f-85a9-6a39034b628b\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.891167 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6zh9\" (UniqueName: \"kubernetes.io/projected/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-kube-api-access-r6zh9\") pod \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " Jan 27 14:21:24 crc 
kubenswrapper[4729]: I0127 14:21:24.891222 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-util\") pod \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\" (UID: \"60fb67db-16cd-4ee3-a6d5-68b8be36ace9\") " Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.891274 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnlv4\" (UniqueName: \"kubernetes.io/projected/a6125a70-d6bd-465f-85a9-6a39034b628b-kube-api-access-jnlv4\") pod \"a6125a70-d6bd-465f-85a9-6a39034b628b\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.891310 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-bundle\") pod \"a6125a70-d6bd-465f-85a9-6a39034b628b\" (UID: \"a6125a70-d6bd-465f-85a9-6a39034b628b\") " Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.892587 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-bundle" (OuterVolumeSpecName: "bundle") pod "a6125a70-d6bd-465f-85a9-6a39034b628b" (UID: "a6125a70-d6bd-465f-85a9-6a39034b628b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.896865 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-kube-api-access-r6zh9" (OuterVolumeSpecName: "kube-api-access-r6zh9") pod "60fb67db-16cd-4ee3-a6d5-68b8be36ace9" (UID: "60fb67db-16cd-4ee3-a6d5-68b8be36ace9"). InnerVolumeSpecName "kube-api-access-r6zh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.897855 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-bundle" (OuterVolumeSpecName: "bundle") pod "60fb67db-16cd-4ee3-a6d5-68b8be36ace9" (UID: "60fb67db-16cd-4ee3-a6d5-68b8be36ace9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.898213 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6125a70-d6bd-465f-85a9-6a39034b628b-kube-api-access-jnlv4" (OuterVolumeSpecName: "kube-api-access-jnlv4") pod "a6125a70-d6bd-465f-85a9-6a39034b628b" (UID: "a6125a70-d6bd-465f-85a9-6a39034b628b"). InnerVolumeSpecName "kube-api-access-jnlv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.902795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-util" (OuterVolumeSpecName: "util") pod "a6125a70-d6bd-465f-85a9-6a39034b628b" (UID: "a6125a70-d6bd-465f-85a9-6a39034b628b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.905016 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-util" (OuterVolumeSpecName: "util") pod "60fb67db-16cd-4ee3-a6d5-68b8be36ace9" (UID: "60fb67db-16cd-4ee3-a6d5-68b8be36ace9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.993688 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.993728 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.993736 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a6125a70-d6bd-465f-85a9-6a39034b628b-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.993745 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6zh9\" (UniqueName: \"kubernetes.io/projected/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-kube-api-access-r6zh9\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.993756 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60fb67db-16cd-4ee3-a6d5-68b8be36ace9-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:24 crc kubenswrapper[4729]: I0127 14:21:24.993764 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnlv4\" (UniqueName: \"kubernetes.io/projected/a6125a70-d6bd-465f-85a9-6a39034b628b-kube-api-access-jnlv4\") on node \"crc\" DevicePath \"\"" Jan 27 14:21:25 crc kubenswrapper[4729]: I0127 14:21:25.446457 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" event={"ID":"a6125a70-d6bd-465f-85a9-6a39034b628b","Type":"ContainerDied","Data":"d99377cacfcd00cce4ce1037bb9ce7a6365d01afc0afda6435f8120de6f2b04b"} Jan 27 14:21:25 crc 
kubenswrapper[4729]: I0127 14:21:25.446506 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99377cacfcd00cce4ce1037bb9ce7a6365d01afc0afda6435f8120de6f2b04b" Jan 27 14:21:25 crc kubenswrapper[4729]: I0127 14:21:25.446526 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm" Jan 27 14:21:25 crc kubenswrapper[4729]: I0127 14:21:25.448481 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" event={"ID":"60fb67db-16cd-4ee3-a6d5-68b8be36ace9","Type":"ContainerDied","Data":"894bf5c52a7acbf3f048ed8f01cff15a615e518be0858c39c6385ced3eb580a9"} Jan 27 14:21:25 crc kubenswrapper[4729]: I0127 14:21:25.448530 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894bf5c52a7acbf3f048ed8f01cff15a615e518be0858c39c6385ced3eb580a9" Jan 27 14:21:25 crc kubenswrapper[4729]: I0127 14:21:25.448532 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489001 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg"] Jan 27 14:21:32 crc kubenswrapper[4729]: E0127 14:21:32.489788 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="util" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489801 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="util" Jan 27 14:21:32 crc kubenswrapper[4729]: E0127 14:21:32.489815 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="util" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489822 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="util" Jan 27 14:21:32 crc kubenswrapper[4729]: E0127 14:21:32.489832 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="extract" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489841 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="extract" Jan 27 14:21:32 crc kubenswrapper[4729]: E0127 14:21:32.489848 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="extract" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489854 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="extract" Jan 27 14:21:32 crc kubenswrapper[4729]: E0127 14:21:32.489892 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="pull" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489898 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="pull" Jan 27 14:21:32 crc kubenswrapper[4729]: E0127 14:21:32.489906 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="pull" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.489912 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="pull" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.490038 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6125a70-d6bd-465f-85a9-6a39034b628b" containerName="extract" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.490051 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fb67db-16cd-4ee3-a6d5-68b8be36ace9" containerName="extract" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.490731 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.495361 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.495429 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-cvvcm" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.496863 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.496999 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.497027 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.502260 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.506957 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg"] Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.602160 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-manager-config\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 
14:21:32.602229 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-apiservice-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.602251 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-webhook-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.602281 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8cgw\" (UniqueName: \"kubernetes.io/projected/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-kube-api-access-t8cgw\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.602397 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.703910 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manager-config\" (UniqueName: \"kubernetes.io/configmap/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-manager-config\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.704000 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-apiservice-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.704029 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-webhook-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.704077 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8cgw\" (UniqueName: \"kubernetes.io/projected/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-kube-api-access-t8cgw\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.704104 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: 
\"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.704981 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-manager-config\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.709549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.710018 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-webhook-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.716678 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-apiservice-cert\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.740216 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t8cgw\" (UniqueName: \"kubernetes.io/projected/cf09e55d-e675-4bbe-aca3-853b9bc46cbc-kube-api-access-t8cgw\") pod \"loki-operator-controller-manager-5975c77b68-sdbrg\" (UID: \"cf09e55d-e675-4bbe-aca3-853b9bc46cbc\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:32 crc kubenswrapper[4729]: I0127 14:21:32.806959 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:33 crc kubenswrapper[4729]: I0127 14:21:33.051539 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg"] Jan 27 14:21:33 crc kubenswrapper[4729]: I0127 14:21:33.506652 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" event={"ID":"cf09e55d-e675-4bbe-aca3-853b9bc46cbc","Type":"ContainerStarted","Data":"482b8fa23647078f72da80fdbf3d3d705290b650377f71a31d6394abb6d3ee59"} Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.405558 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq"] Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.408277 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.410967 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.411100 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-pczfs" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.411142 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.439611 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq"] Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.540015 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncflj\" (UniqueName: \"kubernetes.io/projected/1b55fd12-cb85-45bc-aad0-b2326d50aed1-kube-api-access-ncflj\") pod \"cluster-logging-operator-79cf69ddc8-6vccq\" (UID: \"1b55fd12-cb85-45bc-aad0-b2326d50aed1\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.641198 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncflj\" (UniqueName: \"kubernetes.io/projected/1b55fd12-cb85-45bc-aad0-b2326d50aed1-kube-api-access-ncflj\") pod \"cluster-logging-operator-79cf69ddc8-6vccq\" (UID: \"1b55fd12-cb85-45bc-aad0-b2326d50aed1\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.665325 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncflj\" (UniqueName: \"kubernetes.io/projected/1b55fd12-cb85-45bc-aad0-b2326d50aed1-kube-api-access-ncflj\") pod 
\"cluster-logging-operator-79cf69ddc8-6vccq\" (UID: \"1b55fd12-cb85-45bc-aad0-b2326d50aed1\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" Jan 27 14:21:39 crc kubenswrapper[4729]: I0127 14:21:39.742480 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" Jan 27 14:21:40 crc kubenswrapper[4729]: I0127 14:21:40.100011 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq"] Jan 27 14:21:40 crc kubenswrapper[4729]: I0127 14:21:40.563785 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" event={"ID":"1b55fd12-cb85-45bc-aad0-b2326d50aed1","Type":"ContainerStarted","Data":"92acdc3f9bfdf5ed16e176ff29fdd19462faff591b568cb3b2244c7856f27a8e"} Jan 27 14:21:41 crc kubenswrapper[4729]: I0127 14:21:41.571121 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" event={"ID":"cf09e55d-e675-4bbe-aca3-853b9bc46cbc","Type":"ContainerStarted","Data":"c9d97da573b60b7393c9c37b34dc8c2e75e9e725f2530f27a2bb42f559786026"} Jan 27 14:21:53 crc kubenswrapper[4729]: I0127 14:21:53.683008 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" event={"ID":"1b55fd12-cb85-45bc-aad0-b2326d50aed1","Type":"ContainerStarted","Data":"d77508c9806052b31d50cb37c1946b8ddd94d42598062b61eed8eb79ffed87a3"} Jan 27 14:21:53 crc kubenswrapper[4729]: I0127 14:21:53.685863 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" event={"ID":"cf09e55d-e675-4bbe-aca3-853b9bc46cbc","Type":"ContainerStarted","Data":"9fba38d31490b2959b5462e5593bbfe25169f4794a9f310c6ed6159ba23569be"} Jan 27 14:21:53 crc kubenswrapper[4729]: I0127 14:21:53.686163 4729 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:53 crc kubenswrapper[4729]: I0127 14:21:53.689425 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" Jan 27 14:21:53 crc kubenswrapper[4729]: I0127 14:21:53.720923 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-6vccq" podStartSLOduration=2.347792821 podStartE2EDuration="14.720899744s" podCreationTimestamp="2026-01-27 14:21:39 +0000 UTC" firstStartedPulling="2026-01-27 14:21:40.122096688 +0000 UTC m=+986.706287702" lastFinishedPulling="2026-01-27 14:21:52.495203621 +0000 UTC m=+999.079394625" observedRunningTime="2026-01-27 14:21:53.715599705 +0000 UTC m=+1000.299790719" watchObservedRunningTime="2026-01-27 14:21:53.720899744 +0000 UTC m=+1000.305090758" Jan 27 14:21:53 crc kubenswrapper[4729]: I0127 14:21:53.755560 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" podStartSLOduration=1.977584582 podStartE2EDuration="21.755544022s" podCreationTimestamp="2026-01-27 14:21:32 +0000 UTC" firstStartedPulling="2026-01-27 14:21:33.059730915 +0000 UTC m=+979.643921919" lastFinishedPulling="2026-01-27 14:21:52.837690355 +0000 UTC m=+999.421881359" observedRunningTime="2026-01-27 14:21:53.752024944 +0000 UTC m=+1000.336215948" watchObservedRunningTime="2026-01-27 14:21:53.755544022 +0000 UTC m=+1000.339735026" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.534765 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.536231 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.538558 4729 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-dmkds" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.538976 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.539195 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.542772 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.644951 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e3663ce7-727e-4179-83ce-941115958404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3663ce7-727e-4179-83ce-941115958404\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") " pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.645037 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vfft\" (UniqueName: \"kubernetes.io/projected/ee8a7a98-2cea-4e17-9754-2505d70ca626-kube-api-access-8vfft\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") " pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.746937 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e3663ce7-727e-4179-83ce-941115958404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3663ce7-727e-4179-83ce-941115958404\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") " pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.747030 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-8vfft\" (UniqueName: \"kubernetes.io/projected/ee8a7a98-2cea-4e17-9754-2505d70ca626-kube-api-access-8vfft\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") " pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.749928 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.750055 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e3663ce7-727e-4179-83ce-941115958404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3663ce7-727e-4179-83ce-941115958404\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27aab4aeed34401f1b80dcf467c1dd036b08660f3b084c541189cbf795130d8b/globalmount\"" pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.766930 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vfft\" (UniqueName: \"kubernetes.io/projected/ee8a7a98-2cea-4e17-9754-2505d70ca626-kube-api-access-8vfft\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") " pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.772153 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e3663ce7-727e-4179-83ce-941115958404\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e3663ce7-727e-4179-83ce-941115958404\") pod \"minio\" (UID: \"ee8a7a98-2cea-4e17-9754-2505d70ca626\") " pod="minio-dev/minio" Jan 27 14:21:59 crc kubenswrapper[4729]: I0127 14:21:59.863016 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Jan 27 14:22:00 crc kubenswrapper[4729]: I0127 14:22:00.298857 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 27 14:22:00 crc kubenswrapper[4729]: I0127 14:22:00.725461 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ee8a7a98-2cea-4e17-9754-2505d70ca626","Type":"ContainerStarted","Data":"ce99aeebc2edc056a50933d6bf86285a13d3610beecdc0a72f75aea02c5ade99"} Jan 27 14:22:04 crc kubenswrapper[4729]: I0127 14:22:04.754362 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ee8a7a98-2cea-4e17-9754-2505d70ca626","Type":"ContainerStarted","Data":"538a2d73b71d78ecacb28a1dbf63456992b643dbbb7e4be5efc9ed222c06d48e"} Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.135721 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=12.712527221 podStartE2EDuration="16.135697798s" podCreationTimestamp="2026-01-27 14:21:56 +0000 UTC" firstStartedPulling="2026-01-27 14:22:00.2982801 +0000 UTC m=+1006.882471104" lastFinishedPulling="2026-01-27 14:22:03.721450677 +0000 UTC m=+1010.305641681" observedRunningTime="2026-01-27 14:22:04.788151254 +0000 UTC m=+1011.372342278" watchObservedRunningTime="2026-01-27 14:22:12.135697798 +0000 UTC m=+1018.719888822" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.141566 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.144002 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.147182 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.147450 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.147594 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-4mfbz" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.147474 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.148927 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.157810 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.288136 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-jk5rc"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.289128 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.296826 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.297102 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.297258 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.309527 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.309577 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05d5a86-89ad-486f-b7dd-404906e2ae3b-config\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.309611 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc 
kubenswrapper[4729]: I0127 14:22:12.309665 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.309761 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szpkd\" (UniqueName: \"kubernetes.io/projected/c05d5a86-89ad-486f-b7dd-404906e2ae3b-kube-api-access-szpkd\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.318201 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-jk5rc"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.410795 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szpkd\" (UniqueName: \"kubernetes.io/projected/c05d5a86-89ad-486f-b7dd-404906e2ae3b-kube-api-access-szpkd\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.410852 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc 
kubenswrapper[4729]: I0127 14:22:12.410931 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c529bcb3-c119-47c9-8311-53d2c13f5ddb-config\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.410955 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n6lv\" (UniqueName: \"kubernetes.io/projected/c529bcb3-c119-47c9-8311-53d2c13f5ddb-kube-api-access-8n6lv\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411025 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411052 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05d5a86-89ad-486f-b7dd-404906e2ae3b-config\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411077 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-querier-grpc\") 
pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411139 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411170 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411242 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.411277 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-s3\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.412338 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c05d5a86-89ad-486f-b7dd-404906e2ae3b-config\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.412984 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.419702 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.438411 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c05d5a86-89ad-486f-b7dd-404906e2ae3b-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: \"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.461958 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szpkd\" (UniqueName: \"kubernetes.io/projected/c05d5a86-89ad-486f-b7dd-404906e2ae3b-kube-api-access-szpkd\") pod \"logging-loki-distributor-5f678c8dd6-c62w8\" (UID: 
\"c05d5a86-89ad-486f-b7dd-404906e2ae3b\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.475002 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.476147 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.480259 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.484466 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.484798 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513058 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513107 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-s3\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513151 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513185 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c529bcb3-c119-47c9-8311-53d2c13f5ddb-config\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513201 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n6lv\" (UniqueName: \"kubernetes.io/projected/c529bcb3-c119-47c9-8311-53d2c13f5ddb-kube-api-access-8n6lv\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513235 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.513659 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.519816 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c529bcb3-c119-47c9-8311-53d2c13f5ddb-config\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.527156 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-s3\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.543827 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.544524 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.545288 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/c529bcb3-c119-47c9-8311-53d2c13f5ddb-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.572802 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n6lv\" (UniqueName: \"kubernetes.io/projected/c529bcb3-c119-47c9-8311-53d2c13f5ddb-kube-api-access-8n6lv\") pod \"logging-loki-querier-76788598db-jk5rc\" (UID: \"c529bcb3-c119-47c9-8311-53d2c13f5ddb\") " pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.611818 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.615517 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.615703 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsprw\" (UniqueName: \"kubernetes.io/projected/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-kube-api-access-dsprw\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.615870 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc 
kubenswrapper[4729]: I0127 14:22:12.616073 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-config\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.616168 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.717775 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.717833 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-config\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.717862 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.717930 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.717998 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsprw\" (UniqueName: \"kubernetes.io/projected/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-kube-api-access-dsprw\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.719912 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.721911 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-config\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" 
Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.723603 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.726099 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.757412 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsprw\" (UniqueName: \"kubernetes.io/projected/0c35c4d5-cfb1-4d36-b502-5a9102ac0886-kube-api-access-dsprw\") pod \"logging-loki-query-frontend-69d9546745-g8jsr\" (UID: \"0c35c4d5-cfb1-4d36-b502-5a9102ac0886\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.757663 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.773328 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.773617 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.783465 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-6pwft" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.783937 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.783972 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j"] Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.784057 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.793749 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.793842 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.793966 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.793998 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.794041 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.891825 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921470 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-tls-secret\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921521 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921554 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-rbac\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921571 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-rbac\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921591 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" 
(UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921612 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921633 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-tls-secret\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921653 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-lokistack-gateway\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921672 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvtr\" (UniqueName: \"kubernetes.io/projected/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-kube-api-access-cxvtr\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " 
pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921687 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-tenants\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921705 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921724 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-tenants\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921738 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2br2x\" (UniqueName: \"kubernetes.io/projected/f7d912e8-1da3-439c-9e59-66145d48e35c-kube-api-access-2br2x\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921759 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921800 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-lokistack-gateway\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:12 crc kubenswrapper[4729]: I0127 14:22:12.921826 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.024113 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-lokistack-gateway\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.024218 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " 
pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025312 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025378 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-tls-secret\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025733 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-lokistack-gateway\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025768 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025920 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-rbac\") pod 
\"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025945 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-rbac\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.025999 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026023 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026072 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-tls-secret\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026101 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-lokistack-gateway\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026141 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxvtr\" (UniqueName: \"kubernetes.io/projected/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-kube-api-access-cxvtr\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026165 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-tenants\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026181 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026228 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-tenants\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 
14:22:13.026248 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.026266 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2br2x\" (UniqueName: \"kubernetes.io/projected/f7d912e8-1da3-439c-9e59-66145d48e35c-kube-api-access-2br2x\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.027734 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-rbac\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.027939 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.028670 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-rbac\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " 
pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.029391 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.029974 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-lokistack-gateway\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.031855 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-tenants\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.032517 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-tls-secret\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.032918 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-gateway-ca-bundle\") pod 
\"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.033809 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.034234 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-tenants\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.034357 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-tls-secret\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.037008 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f7d912e8-1da3-439c-9e59-66145d48e35c-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.048174 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2br2x\" (UniqueName: 
\"kubernetes.io/projected/f7d912e8-1da3-439c-9e59-66145d48e35c-kube-api-access-2br2x\") pod \"logging-loki-gateway-5955fd6cd7-jf45j\" (UID: \"f7d912e8-1da3-439c-9e59-66145d48e35c\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.048419 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxvtr\" (UniqueName: \"kubernetes.io/projected/0c13a35c-2b09-4ffa-a6e5-10ba4311f962-kube-api-access-cxvtr\") pod \"logging-loki-gateway-5955fd6cd7-hds4m\" (UID: \"0c13a35c-2b09-4ffa-a6e5-10ba4311f962\") " pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.132411 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.134066 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.160233 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8"] Jan 27 14:22:13 crc kubenswrapper[4729]: W0127 14:22:13.192053 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc05d5a86_89ad_486f_b7dd_404906e2ae3b.slice/crio-8bf07cba0fbd4e4cc7091890aa72881e26d062a0f31b3421d0d658cda0755a58 WatchSource:0}: Error finding container 8bf07cba0fbd4e4cc7091890aa72881e26d062a0f31b3421d0d658cda0755a58: Status 404 returned error can't find the container with id 8bf07cba0fbd4e4cc7091890aa72881e26d062a0f31b3421d0d658cda0755a58 Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.195161 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr"] Jan 27 14:22:13 
crc kubenswrapper[4729]: W0127 14:22:13.215906 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c35c4d5_cfb1_4d36_b502_5a9102ac0886.slice/crio-01ac691bbb13e3143c2e63b5b4c24d085a7738bc7689588d07c937d91359a3e5 WatchSource:0}: Error finding container 01ac691bbb13e3143c2e63b5b4c24d085a7738bc7689588d07c937d91359a3e5: Status 404 returned error can't find the container with id 01ac691bbb13e3143c2e63b5b4c24d085a7738bc7689588d07c937d91359a3e5 Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.219825 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-jk5rc"] Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.288932 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.289916 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.292279 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.292505 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.304750 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.367867 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.368915 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.371013 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.371762 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.388374 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434625 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f768b2c-e000-4052-9e92-82a3bde68514-config\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434691 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4mc\" (UniqueName: \"kubernetes.io/projected/7f768b2c-e000-4052-9e92-82a3bde68514-kube-api-access-8t4mc\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434722 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434793 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434832 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434889 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434914 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.434954 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536709 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlvl\" (UniqueName: \"kubernetes.io/projected/cc44c481-9e30-42f7-883b-209184e04fba-kube-api-access-gxlvl\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536758 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536783 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536816 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04bba550-f5d8-4c81-a285-67626dede731\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04bba550-f5d8-4c81-a285-67626dede731\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536841 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f768b2c-e000-4052-9e92-82a3bde68514-config\") pod 
\"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536944 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t4mc\" (UniqueName: \"kubernetes.io/projected/7f768b2c-e000-4052-9e92-82a3bde68514-kube-api-access-8t4mc\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.536995 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537052 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537095 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537162 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537187 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537218 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537241 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc44c481-9e30-42f7-883b-209184e04fba-config\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537274 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537305 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.537976 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f768b2c-e000-4052-9e92-82a3bde68514-config\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.538612 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.541211 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.541239 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac573500ace0a6215673668706f6408e1e450e4b8b5b0d58fdb8dc17ba7cd44a/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.541841 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.543433 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.543470 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9acd42a7f1c4dfce296cb5114dd68b39e47c731064361e01807480b4015fc30d/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.544156 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.557563 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.558862 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.559540 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t4mc\" (UniqueName: \"kubernetes.io/projected/7f768b2c-e000-4052-9e92-82a3bde68514-kube-api-access-8t4mc\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.567284 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/7f768b2c-e000-4052-9e92-82a3bde68514-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.569266 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.569461 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.575091 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.583469 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ddb8c532-d0ed-48f8-a4dd-4fc4c80cf34b\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.583958 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4bee3524-f3d1-490d-8fde-e22a46f1872f\") pod \"logging-loki-ingester-0\" (UID: \"7f768b2c-e000-4052-9e92-82a3bde68514\") " pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.611945 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639159 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639540 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc44c481-9e30-42f7-883b-209184e04fba-config\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639602 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639651 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gd6c\" (UniqueName: \"kubernetes.io/projected/cb7f1542-ef3d-4033-9345-6c504620a57e-kube-api-access-4gd6c\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639701 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639744 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639800 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-87e8653d-764d-4df3-b442-8df89029cf2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87e8653d-764d-4df3-b442-8df89029cf2d\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639906 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639949 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7f1542-ef3d-4033-9345-6c504620a57e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.639996 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxlvl\" (UniqueName: \"kubernetes.io/projected/cc44c481-9e30-42f7-883b-209184e04fba-kube-api-access-gxlvl\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.640050 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.640090 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.640132 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04bba550-f5d8-4c81-a285-67626dede731\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04bba550-f5d8-4c81-a285-67626dede731\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.640190 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.641642 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc44c481-9e30-42f7-883b-209184e04fba-config\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.642816 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.645697 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.646859 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.647575 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cc44c481-9e30-42f7-883b-209184e04fba-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.649644 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.649689 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04bba550-f5d8-4c81-a285-67626dede731\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04bba550-f5d8-4c81-a285-67626dede731\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02fd16ff85172da81de9e7c99884eca194572caf2506ecf11e08d2572f7b9a72/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.673973 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m"]
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.678980 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxlvl\" (UniqueName: \"kubernetes.io/projected/cc44c481-9e30-42f7-883b-209184e04fba-kube-api-access-gxlvl\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.679607 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04bba550-f5d8-4c81-a285-67626dede731\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04bba550-f5d8-4c81-a285-67626dede731\") pod \"logging-loki-compactor-0\" (UID: \"cc44c481-9e30-42f7-883b-209184e04fba\") " pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.695767 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.742250 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7f1542-ef3d-4033-9345-6c504620a57e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.742326 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.742430 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.742494 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.742523 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gd6c\" (UniqueName: \"kubernetes.io/projected/cb7f1542-ef3d-4033-9345-6c504620a57e-kube-api-access-4gd6c\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.742561 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.743369 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-87e8653d-764d-4df3-b442-8df89029cf2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87e8653d-764d-4df3-b442-8df89029cf2d\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.744244 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.745904 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7f1542-ef3d-4033-9345-6c504620a57e-config\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.746052 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j"]
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.747996 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.750406 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.750672 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.750708 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-87e8653d-764d-4df3-b442-8df89029cf2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87e8653d-764d-4df3-b442-8df89029cf2d\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d42da3f8a3e5ed78beda3073acf4ce0685f932d050dd6e52e7fdd8a9dfa4598e/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.750991 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/cb7f1542-ef3d-4033-9345-6c504620a57e-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.760321 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gd6c\" (UniqueName: \"kubernetes.io/projected/cb7f1542-ef3d-4033-9345-6c504620a57e-kube-api-access-4gd6c\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.795426 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-87e8653d-764d-4df3-b442-8df89029cf2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-87e8653d-764d-4df3-b442-8df89029cf2d\") pod \"logging-loki-index-gateway-0\" (UID: \"cb7f1542-ef3d-4033-9345-6c504620a57e\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.843077 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" event={"ID":"c529bcb3-c119-47c9-8311-53d2c13f5ddb","Type":"ContainerStarted","Data":"7f63a21e9574fcb6e5e8388f85848af5382d53cd191ef6c1103bf01330e8e833"}
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.856538 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" event={"ID":"c05d5a86-89ad-486f-b7dd-404906e2ae3b","Type":"ContainerStarted","Data":"8bf07cba0fbd4e4cc7091890aa72881e26d062a0f31b3421d0d658cda0755a58"}
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.859635 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" event={"ID":"f7d912e8-1da3-439c-9e59-66145d48e35c","Type":"ContainerStarted","Data":"4e5b4fc63e07b17050faf5ad9034a8549f9fd023ae8e8addb87e38fd2f4dc283"}
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.861911 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" event={"ID":"0c35c4d5-cfb1-4d36-b502-5a9102ac0886","Type":"ContainerStarted","Data":"01ac691bbb13e3143c2e63b5b4c24d085a7738bc7689588d07c937d91359a3e5"}
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.863372 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" event={"ID":"0c13a35c-2b09-4ffa-a6e5-10ba4311f962","Type":"ContainerStarted","Data":"02984c09576ad31ba2ee16d85ded0ebf95d52a7c49e30d859a5b0d51109e39eb"}
Jan 27 14:22:13 crc kubenswrapper[4729]: I0127 14:22:13.904199 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 27 14:22:14 crc kubenswrapper[4729]: W0127 14:22:14.085103 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f768b2c_e000_4052_9e92_82a3bde68514.slice/crio-476fb6d7531414d2a43c92782b241251c6e44c16164986343105002bd2a2b472 WatchSource:0}: Error finding container 476fb6d7531414d2a43c92782b241251c6e44c16164986343105002bd2a2b472: Status 404 returned error can't find the container with id 476fb6d7531414d2a43c92782b241251c6e44c16164986343105002bd2a2b472
Jan 27 14:22:14 crc kubenswrapper[4729]: I0127 14:22:14.091128 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 27 14:22:14 crc kubenswrapper[4729]: W0127 14:22:14.162153 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc44c481_9e30_42f7_883b_209184e04fba.slice/crio-b1161508271b23ee734ba7502b8d7c7e91923e96930bf5d40a264a9d80d80eb1 WatchSource:0}: Error finding container b1161508271b23ee734ba7502b8d7c7e91923e96930bf5d40a264a9d80d80eb1: Status 404 returned error can't find the container with id b1161508271b23ee734ba7502b8d7c7e91923e96930bf5d40a264a9d80d80eb1
Jan 27 14:22:14 crc kubenswrapper[4729]: I0127 14:22:14.167974 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 27 14:22:14 crc kubenswrapper[4729]: I0127 14:22:14.310109 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 27 14:22:14 crc kubenswrapper[4729]: W0127 14:22:14.314801 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7f1542_ef3d_4033_9345_6c504620a57e.slice/crio-debfd7afe41911b7538e29d621f7b97e7e834fd50c9b6480995212b30dcd8d4a WatchSource:0}: Error finding container debfd7afe41911b7538e29d621f7b97e7e834fd50c9b6480995212b30dcd8d4a: Status 404 returned error can't find the container with id debfd7afe41911b7538e29d621f7b97e7e834fd50c9b6480995212b30dcd8d4a
Jan 27 14:22:14 crc kubenswrapper[4729]: I0127 14:22:14.887094 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"cb7f1542-ef3d-4033-9345-6c504620a57e","Type":"ContainerStarted","Data":"debfd7afe41911b7538e29d621f7b97e7e834fd50c9b6480995212b30dcd8d4a"}
Jan 27 14:22:14 crc kubenswrapper[4729]: I0127 14:22:14.889121 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"7f768b2c-e000-4052-9e92-82a3bde68514","Type":"ContainerStarted","Data":"476fb6d7531414d2a43c92782b241251c6e44c16164986343105002bd2a2b472"}
Jan 27 14:22:14 crc kubenswrapper[4729]: I0127 14:22:14.891653 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"cc44c481-9e30-42f7-883b-209184e04fba","Type":"ContainerStarted","Data":"b1161508271b23ee734ba7502b8d7c7e91923e96930bf5d40a264a9d80d80eb1"}
Jan 27 14:22:29 crc kubenswrapper[4729]: E0127 14:22:29.833966 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459"
Jan 27 14:22:29 crc kubenswrapper[4729]: E0127 14:22:29.834685 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxlvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod logging-loki-compactor-0_openshift-logging(cc44c481-9e30-42f7-883b-209184e04fba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:22:29 crc kubenswrapper[4729]: E0127 14:22:29.835906 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-logging/logging-loki-compactor-0" podUID="cc44c481-9e30-42f7-883b-209184e04fba" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.100493 4729 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.100698 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volum
eMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t4mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod logging-loki-ingester-0_openshift-logging(7f768b2c-e000-4052-9e92-82a3bde68514): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.101888 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-logging/logging-loki-ingester-0" podUID="7f768b2c-e000-4052-9e92-82a3bde68514" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.599551 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.600309 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil
,},VolumeMount{Name:logging-loki-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gd6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod logging-loki-index-gateway-0_openshift-logging(cb7f1542-ef3d-4033-9345-6c504620a57e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.601807 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-logging/logging-loki-index-gateway-0" podUID="cb7f1542-ef3d-4033-9345-6c504620a57e" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.849128 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459\\\"\"" pod="openshift-logging/logging-loki-compactor-0" podUID="cc44c481-9e30-42f7-883b-209184e04fba" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.957484 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.957726 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:logging-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n6lv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:Prob
eHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod logging-loki-querier-76788598db-jk5rc_openshift-logging(c529bcb3-c119-47c9-8311-53d2c13f5ddb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:22:30 crc kubenswrapper[4729]: E0127 14:22:30.958990 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" podUID="c529bcb3-c119-47c9-8311-53d2c13f5ddb" Jan 27 14:22:31 crc kubenswrapper[4729]: E0127 14:22:31.538418 4729 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459\\\"\"" pod="openshift-logging/logging-loki-index-gateway-0" podUID="cb7f1542-ef3d-4033-9345-6c504620a57e" Jan 27 14:22:31 crc kubenswrapper[4729]: E0127 14:22:31.580839 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459\\\"\"" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" podUID="c529bcb3-c119-47c9-8311-53d2c13f5ddb" Jan 27 14:22:31 crc kubenswrapper[4729]: E0127 14:22:31.580845 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459\\\"\"" pod="openshift-logging/logging-loki-ingester-0" podUID="7f768b2c-e000-4052-9e92-82a3bde68514" Jan 27 14:22:31 crc kubenswrapper[4729]: E0127 14:22:31.718326 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459" Jan 27 14:22:31 crc kubenswrapper[4729]: E0127 14:22:31.719302 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:8c073f8de8288bf63489a400d7f93e96562652832965d33e0515f48d46282459,Command:[],Args:[-target=distributor 
-config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szpkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod logging-loki-distributor-5f678c8dd6-c62w8_openshift-logging(c05d5a86-89ad-486f-b7dd-404906e2ae3b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:22:31 crc kubenswrapper[4729]: E0127 14:22:31.720613 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" podUID="c05d5a86-89ad-486f-b7dd-404906e2ae3b" Jan 27 14:22:32 crc kubenswrapper[4729]: I0127 14:22:32.014774 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" event={"ID":"0c35c4d5-cfb1-4d36-b502-5a9102ac0886","Type":"ContainerStarted","Data":"7cb20ff6cb1feafeb4df2420964fb2c8e7eb9d10ab45b274f3a46480a3be10d8"} Jan 27 14:22:32 crc kubenswrapper[4729]: I0127 14:22:32.014997 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:22:32 crc kubenswrapper[4729]: I0127 14:22:32.017029 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" event={"ID":"0c13a35c-2b09-4ffa-a6e5-10ba4311f962","Type":"ContainerStarted","Data":"ef13c8e9338a453d22ceed38d1b7d79d52281e2c1602839ee3e06cd8a89f96a8"} Jan 27 14:22:32 crc kubenswrapper[4729]: I0127 14:22:32.018756 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" event={"ID":"f7d912e8-1da3-439c-9e59-66145d48e35c","Type":"ContainerStarted","Data":"69f0e3dc041cf6c971ae220322335a528a436c4242873d5ff3f1603a57a8c455"} Jan 27 14:22:32 crc kubenswrapper[4729]: I0127 14:22:32.041128 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" podStartSLOduration=1.51199645 podStartE2EDuration="20.041105639s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:13.219980387 +0000 UTC m=+1019.804171391" lastFinishedPulling="2026-01-27 14:22:31.749089576 +0000 UTC m=+1038.333280580" observedRunningTime="2026-01-27 14:22:32.039273867 +0000 UTC m=+1038.623464891" watchObservedRunningTime="2026-01-27 14:22:32.041105639 +0000 UTC m=+1038.625296653" Jan 27 14:22:33 crc kubenswrapper[4729]: I0127 14:22:33.027416 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" event={"ID":"c05d5a86-89ad-486f-b7dd-404906e2ae3b","Type":"ContainerStarted","Data":"fcee44f0abc3495120de6a7d07c724345427912ad60005fdc78fc2439a241ef2"} Jan 27 14:22:33 crc kubenswrapper[4729]: I0127 14:22:33.028628 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:33 crc kubenswrapper[4729]: I0127 14:22:33.055251 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" podStartSLOduration=-9223372015.799553 podStartE2EDuration="21.055223046s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:13.195689118 +0000 UTC m=+1019.779880122" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:22:33.049303981 +0000 UTC m=+1039.633495005" watchObservedRunningTime="2026-01-27 14:22:33.055223046 +0000 UTC m=+1039.639414050" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.036220 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" event={"ID":"0c13a35c-2b09-4ffa-a6e5-10ba4311f962","Type":"ContainerStarted","Data":"e9ba8e474b1eac09b93c253d8501e26043c881282d914c521cdc356463fcb54b"} Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.037140 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.037172 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.038052 4729 patch_prober.go:28] interesting pod/logging-loki-gateway-5955fd6cd7-hds4m container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": dial tcp 10.217.0.56:8083: connect: connection refused" start-of-body= Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.038092 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" podUID="0c13a35c-2b09-4ffa-a6e5-10ba4311f962" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": dial tcp 10.217.0.56:8083: connect: connection refused" Jan 27 
14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.038441 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" event={"ID":"f7d912e8-1da3-439c-9e59-66145d48e35c","Type":"ContainerStarted","Data":"61ed417965807c7574b657ca2849edabde0bf63b0fba4c0a886fff4d1202cf59"} Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.039239 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.048251 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.057914 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" podStartSLOduration=1.9369237080000001 podStartE2EDuration="22.057894323s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:13.687010732 +0000 UTC m=+1020.271201736" lastFinishedPulling="2026-01-27 14:22:33.807981337 +0000 UTC m=+1040.392172351" observedRunningTime="2026-01-27 14:22:34.054795917 +0000 UTC m=+1040.638986921" watchObservedRunningTime="2026-01-27 14:22:34.057894323 +0000 UTC m=+1040.642085347" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.063420 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:34 crc kubenswrapper[4729]: I0127 14:22:34.112983 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" podStartSLOduration=2.076440988 podStartE2EDuration="22.112957732s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:13.765709202 +0000 UTC m=+1020.349900206" lastFinishedPulling="2026-01-27 
14:22:33.802225946 +0000 UTC m=+1040.386416950" observedRunningTime="2026-01-27 14:22:34.104021253 +0000 UTC m=+1040.688212277" watchObservedRunningTime="2026-01-27 14:22:34.112957732 +0000 UTC m=+1040.697148746" Jan 27 14:22:35 crc kubenswrapper[4729]: I0127 14:22:35.044581 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:35 crc kubenswrapper[4729]: I0127 14:22:35.051108 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" Jan 27 14:22:35 crc kubenswrapper[4729]: I0127 14:22:35.053845 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-jf45j" Jan 27 14:22:44 crc kubenswrapper[4729]: I0127 14:22:44.106545 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" event={"ID":"c529bcb3-c119-47c9-8311-53d2c13f5ddb","Type":"ContainerStarted","Data":"9325ab498739eb447aeb0e62c56c3a92a643286e387726c65b602c4c6ac86099"} Jan 27 14:22:44 crc kubenswrapper[4729]: I0127 14:22:44.107349 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:22:44 crc kubenswrapper[4729]: I0127 14:22:44.132291 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" podStartSLOduration=-9223372004.722504 podStartE2EDuration="32.132272461s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:13.236820317 +0000 UTC m=+1019.821011321" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:22:44.126628423 +0000 UTC m=+1050.710819447" watchObservedRunningTime="2026-01-27 14:22:44.132272461 +0000 UTC m=+1050.716463475" Jan 27 14:22:46 crc kubenswrapper[4729]: 
I0127 14:22:46.138648 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"7f768b2c-e000-4052-9e92-82a3bde68514","Type":"ContainerStarted","Data":"7fc393c59ea130c06c78e87296bc022b042147b3de1418a8e3c2b2a0ac78b4e0"} Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.139408 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.140757 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"cb7f1542-ef3d-4033-9345-6c504620a57e","Type":"ContainerStarted","Data":"2cfae09945df3234f7820ef5c783d11d52c1f40f12fe3aee937dceadd2b0573d"} Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.141067 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.143275 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"cc44c481-9e30-42f7-883b-209184e04fba","Type":"ContainerStarted","Data":"31e1e9ceb888a57058568e4d1bd40625290a104cdad7718c044d935014799c05"} Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.143493 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.159344 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=-9223372002.695454 podStartE2EDuration="34.159322982s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:14.089096801 +0000 UTC m=+1020.673287805" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:22:46.157785049 +0000 UTC m=+1052.741976073" 
watchObservedRunningTime="2026-01-27 14:22:46.159322982 +0000 UTC m=+1052.743513986" Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.188312 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=-9223372002.666485 podStartE2EDuration="34.188290922s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:14.317558827 +0000 UTC m=+1020.901749831" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:22:46.181937164 +0000 UTC m=+1052.766128168" watchObservedRunningTime="2026-01-27 14:22:46.188290922 +0000 UTC m=+1052.772481916" Jan 27 14:22:46 crc kubenswrapper[4729]: I0127 14:22:46.208123 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=-9223372002.646677 podStartE2EDuration="34.208099626s" podCreationTimestamp="2026-01-27 14:22:12 +0000 UTC" firstStartedPulling="2026-01-27 14:22:14.164463278 +0000 UTC m=+1020.748654282" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:22:46.20147489 +0000 UTC m=+1052.785665904" watchObservedRunningTime="2026-01-27 14:22:46.208099626 +0000 UTC m=+1052.792290630" Jan 27 14:22:52 crc kubenswrapper[4729]: I0127 14:22:52.488145 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-c62w8" Jan 27 14:22:52 crc kubenswrapper[4729]: I0127 14:22:52.899327 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-g8jsr" Jan 27 14:23:02 crc kubenswrapper[4729]: I0127 14:23:02.626257 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-jk5rc" Jan 27 14:23:03 crc kubenswrapper[4729]: I0127 14:23:03.619160 4729 
patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 27 14:23:03 crc kubenswrapper[4729]: I0127 14:23:03.619546 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f768b2c-e000-4052-9e92-82a3bde68514" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 14:23:03 crc kubenswrapper[4729]: I0127 14:23:03.704394 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Jan 27 14:23:03 crc kubenswrapper[4729]: I0127 14:23:03.910514 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Jan 27 14:23:13 crc kubenswrapper[4729]: I0127 14:23:13.617192 4729 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Jan 27 14:23:13 crc kubenswrapper[4729]: I0127 14:23:13.617844 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f768b2c-e000-4052-9e92-82a3bde68514" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 14:23:22 crc kubenswrapper[4729]: I0127 14:23:22.654771 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:23:22 crc kubenswrapper[4729]: I0127 14:23:22.655434 
4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:23:23 crc kubenswrapper[4729]: I0127 14:23:23.617792 4729 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 14:23:23 crc kubenswrapper[4729]: I0127 14:23:23.618149 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f768b2c-e000-4052-9e92-82a3bde68514" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 14:23:33 crc kubenswrapper[4729]: I0127 14:23:33.617470 4729 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Jan 27 14:23:33 crc kubenswrapper[4729]: I0127 14:23:33.618034 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="7f768b2c-e000-4052-9e92-82a3bde68514" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 27 14:23:43 crc kubenswrapper[4729]: I0127 14:23:43.616756 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Jan 27 14:23:52 crc kubenswrapper[4729]: I0127 14:23:52.655693 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:23:52 crc kubenswrapper[4729]: I0127 14:23:52.656328 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.945066 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-7kzp5"] Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.948782 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.954664 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.954708 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.955171 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-fjn5m" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.955946 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.956443 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.960846 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-7kzp5"] Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 
14:24:00.967752 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.997946 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-trusted-ca\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998011 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-tmp\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998184 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-sa-token\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998308 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-token\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998347 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-datadir\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " 
pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998407 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config-openshift-service-cacrt\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998524 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-syslog-receiver\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998585 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-entrypoint\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998752 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998923 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7tv\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-kube-api-access-tl7tv\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " 
pod="openshift-logging/collector-7kzp5" Jan 27 14:24:00 crc kubenswrapper[4729]: I0127 14:24:00.998965 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-metrics\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.027764 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-7kzp5"] Jan 27 14:24:01 crc kubenswrapper[4729]: E0127 14:24:01.028429 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-tl7tv metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-7kzp5" podUID="8a9757e0-ba8f-4460-9c3d-ad656dfd053a" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.123767 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-trusted-ca\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.122902 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-trusted-ca\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.124535 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-tmp\") pod 
\"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.125113 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-sa-token\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.125151 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-datadir\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.125178 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-token\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.125646 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config-openshift-service-cacrt\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.126163 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config-openshift-service-cacrt\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 
27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.126224 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-syslog-receiver\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.126270 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-entrypoint\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.126556 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.126678 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7tv\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-kube-api-access-tl7tv\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.126712 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-metrics\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.132092 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" 
(UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-entrypoint\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.132197 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-datadir\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.133152 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.133830 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-tmp\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.134635 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-syslog-receiver\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.136126 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-token\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 
14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.136994 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-metrics\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.145147 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-sa-token\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.152035 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7tv\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-kube-api-access-tl7tv\") pod \"collector-7kzp5\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.639891 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.649987 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-7kzp5" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.734910 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-trusted-ca\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735008 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-syslog-receiver\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735058 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config-openshift-service-cacrt\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735110 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-sa-token\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735172 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-tmp\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735222 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735261 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-token\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735297 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-entrypoint\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735314 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-metrics\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735332 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-datadir\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735395 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl7tv\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-kube-api-access-tl7tv\") pod \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\" (UID: \"8a9757e0-ba8f-4460-9c3d-ad656dfd053a\") " Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735410 
4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735757 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config" (OuterVolumeSpecName: "config") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735840 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.735855 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.736350 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "entrypoint". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.736514 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.737062 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-datadir" (OuterVolumeSpecName: "datadir") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.739006 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.739182 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-token" (OuterVolumeSpecName: "collector-token") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.739346 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-sa-token" (OuterVolumeSpecName: "sa-token") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.740102 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-tmp" (OuterVolumeSpecName: "tmp") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.740983 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-metrics" (OuterVolumeSpecName: "metrics") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.743192 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-kube-api-access-tl7tv" (OuterVolumeSpecName: "kube-api-access-tl7tv") pod "8a9757e0-ba8f-4460-9c3d-ad656dfd053a" (UID: "8a9757e0-ba8f-4460-9c3d-ad656dfd053a"). InnerVolumeSpecName "kube-api-access-tl7tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837314 4729 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837367 4729 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837382 4729 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837401 4729 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-tmp\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837413 4729 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-collector-token\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837425 4729 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-entrypoint\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837435 4729 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837445 4729 
reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-datadir\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:01 crc kubenswrapper[4729]: I0127 14:24:01.837454 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl7tv\" (UniqueName: \"kubernetes.io/projected/8a9757e0-ba8f-4460-9c3d-ad656dfd053a-kube-api-access-tl7tv\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.647670 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-7kzp5" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.698193 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-7kzp5"] Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.703668 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-7kzp5"] Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.714024 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-p56mj"] Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.715020 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.717614 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.717916 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-fjn5m" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.718148 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.718443 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.718553 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.723842 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.731278 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-p56mj"] Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.851758 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-collector-token\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.851838 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6bf88053-f822-4735-b8df-cfd1622aad97-sa-token\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " 
pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.851910 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfx4s\" (UniqueName: \"kubernetes.io/projected/6bf88053-f822-4735-b8df-cfd1622aad97-kube-api-access-cfx4s\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.851933 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf88053-f822-4735-b8df-cfd1622aad97-tmp\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.851968 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6bf88053-f822-4735-b8df-cfd1622aad97-datadir\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.852007 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-collector-syslog-receiver\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.852026 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-config\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc 
kubenswrapper[4729]: I0127 14:24:02.852059 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-entrypoint\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.852082 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-config-openshift-service-cacrt\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.852111 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-trusted-ca\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.852132 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-metrics\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953300 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6bf88053-f822-4735-b8df-cfd1622aad97-sa-token\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953367 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cfx4s\" (UniqueName: \"kubernetes.io/projected/6bf88053-f822-4735-b8df-cfd1622aad97-kube-api-access-cfx4s\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953392 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf88053-f822-4735-b8df-cfd1622aad97-tmp\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953423 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6bf88053-f822-4735-b8df-cfd1622aad97-datadir\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953456 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-collector-syslog-receiver\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953472 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-config\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953500 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-entrypoint\") pod 
\"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953519 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-config-openshift-service-cacrt\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953545 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-trusted-ca\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953561 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-metrics\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.953592 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-collector-token\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.954088 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/6bf88053-f822-4735-b8df-cfd1622aad97-datadir\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 
14:24:02.955065 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-config\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.955406 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-trusted-ca\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.955639 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-entrypoint\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.955741 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/6bf88053-f822-4735-b8df-cfd1622aad97-config-openshift-service-cacrt\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.962867 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6bf88053-f822-4735-b8df-cfd1622aad97-tmp\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.963320 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-collector-syslog-receiver\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.963441 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-metrics\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.964034 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/6bf88053-f822-4735-b8df-cfd1622aad97-collector-token\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.974406 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfx4s\" (UniqueName: \"kubernetes.io/projected/6bf88053-f822-4735-b8df-cfd1622aad97-kube-api-access-cfx4s\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:02 crc kubenswrapper[4729]: I0127 14:24:02.975124 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/6bf88053-f822-4735-b8df-cfd1622aad97-sa-token\") pod \"collector-p56mj\" (UID: \"6bf88053-f822-4735-b8df-cfd1622aad97\") " pod="openshift-logging/collector-p56mj" Jan 27 14:24:03 crc kubenswrapper[4729]: I0127 14:24:03.031646 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-p56mj" Jan 27 14:24:03 crc kubenswrapper[4729]: I0127 14:24:03.450887 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-p56mj"] Jan 27 14:24:03 crc kubenswrapper[4729]: I0127 14:24:03.459930 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:24:03 crc kubenswrapper[4729]: I0127 14:24:03.658301 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-p56mj" event={"ID":"6bf88053-f822-4735-b8df-cfd1622aad97","Type":"ContainerStarted","Data":"af3255a728897a1546c81d4cc4d8830e468cffc62f6986db3af4bdbf302cefa6"} Jan 27 14:24:04 crc kubenswrapper[4729]: I0127 14:24:04.062448 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9757e0-ba8f-4460-9c3d-ad656dfd053a" path="/var/lib/kubelet/pods/8a9757e0-ba8f-4460-9c3d-ad656dfd053a/volumes" Jan 27 14:24:13 crc kubenswrapper[4729]: I0127 14:24:13.729819 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-p56mj" event={"ID":"6bf88053-f822-4735-b8df-cfd1622aad97","Type":"ContainerStarted","Data":"cb80daae79cb1346f523e07fb5b1022efc7285171dc9a7959da514b0c1ee1306"} Jan 27 14:24:13 crc kubenswrapper[4729]: I0127 14:24:13.754072 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-p56mj" podStartSLOduration=2.233532861 podStartE2EDuration="11.754034988s" podCreationTimestamp="2026-01-27 14:24:02 +0000 UTC" firstStartedPulling="2026-01-27 14:24:03.459659823 +0000 UTC m=+1130.043850827" lastFinishedPulling="2026-01-27 14:24:12.98016195 +0000 UTC m=+1139.564352954" observedRunningTime="2026-01-27 14:24:13.749900027 +0000 UTC m=+1140.334091031" watchObservedRunningTime="2026-01-27 14:24:13.754034988 +0000 UTC m=+1140.338226002" Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.655117 4729 patch_prober.go:28] interesting 
pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.655711 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.655780 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.656810 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3482ae2a5a3f30c725532e5e8607ac477190fd04992775cb1a4c3b191b4ca65"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.656940 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://f3482ae2a5a3f30c725532e5e8607ac477190fd04992775cb1a4c3b191b4ca65" gracePeriod=600 Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.802546 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="f3482ae2a5a3f30c725532e5e8607ac477190fd04992775cb1a4c3b191b4ca65" exitCode=0 Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.802599 
4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"f3482ae2a5a3f30c725532e5e8607ac477190fd04992775cb1a4c3b191b4ca65"} Jan 27 14:24:22 crc kubenswrapper[4729]: I0127 14:24:22.802665 4729 scope.go:117] "RemoveContainer" containerID="031b86fdf1c8588d1c8685212fd5e4f48ddec95b4eeebd667ab12b9388977fde" Jan 27 14:24:23 crc kubenswrapper[4729]: I0127 14:24:23.812495 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"57151d4fd492cf8de80ac5d781a744532bc43e34fa60909ce879c487fc0325fe"} Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.190203 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5"] Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.194352 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.199247 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.207213 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5"] Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.272284 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.272349 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.272561 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgd4\" (UniqueName: \"kubernetes.io/projected/293cacf8-dc6b-4065-b80c-5020312fc92a-kube-api-access-rfgd4\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: 
I0127 14:24:44.374141 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgd4\" (UniqueName: \"kubernetes.io/projected/293cacf8-dc6b-4065-b80c-5020312fc92a-kube-api-access-rfgd4\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.374544 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.374692 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.375291 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.375303 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.412428 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgd4\" (UniqueName: \"kubernetes.io/projected/293cacf8-dc6b-4065-b80c-5020312fc92a-kube-api-access-rfgd4\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.525292 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.960106 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5"] Jan 27 14:24:44 crc kubenswrapper[4729]: I0127 14:24:44.994356 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" event={"ID":"293cacf8-dc6b-4065-b80c-5020312fc92a","Type":"ContainerStarted","Data":"ce86296441439835e70719b36960085eef8852504093255c593e88a32cd71ea1"} Jan 27 14:24:46 crc kubenswrapper[4729]: I0127 14:24:46.001854 4729 generic.go:334] "Generic (PLEG): container finished" podID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerID="4b93290416e8297850c40eadad06dd78c15b61018c3776d15543caf044f6db73" exitCode=0 Jan 27 14:24:46 crc kubenswrapper[4729]: I0127 14:24:46.001918 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" event={"ID":"293cacf8-dc6b-4065-b80c-5020312fc92a","Type":"ContainerDied","Data":"4b93290416e8297850c40eadad06dd78c15b61018c3776d15543caf044f6db73"} Jan 27 14:24:52 crc kubenswrapper[4729]: I0127 14:24:52.044614 4729 generic.go:334] "Generic (PLEG): container finished" podID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerID="b4f1d6f35a7de4fd159dfe64eafa9a306bc8f7fcfb5d2b6cf1d6c888e61608eb" exitCode=0 Jan 27 14:24:52 crc kubenswrapper[4729]: I0127 14:24:52.045107 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" event={"ID":"293cacf8-dc6b-4065-b80c-5020312fc92a","Type":"ContainerDied","Data":"b4f1d6f35a7de4fd159dfe64eafa9a306bc8f7fcfb5d2b6cf1d6c888e61608eb"} Jan 27 14:24:53 crc kubenswrapper[4729]: I0127 14:24:53.062463 4729 generic.go:334] "Generic (PLEG): container finished" podID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerID="66b8e767d35d409d29f034ca5ad6f1f72715ad9f32f2753814c1326f9cd61b78" exitCode=0 Jan 27 14:24:53 crc kubenswrapper[4729]: I0127 14:24:53.063237 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" event={"ID":"293cacf8-dc6b-4065-b80c-5020312fc92a","Type":"ContainerDied","Data":"66b8e767d35d409d29f034ca5ad6f1f72715ad9f32f2753814c1326f9cd61b78"} Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.366555 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.432738 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfgd4\" (UniqueName: \"kubernetes.io/projected/293cacf8-dc6b-4065-b80c-5020312fc92a-kube-api-access-rfgd4\") pod \"293cacf8-dc6b-4065-b80c-5020312fc92a\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.432855 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-bundle\") pod \"293cacf8-dc6b-4065-b80c-5020312fc92a\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.432896 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-util\") pod \"293cacf8-dc6b-4065-b80c-5020312fc92a\" (UID: \"293cacf8-dc6b-4065-b80c-5020312fc92a\") " Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.433657 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-bundle" (OuterVolumeSpecName: "bundle") pod "293cacf8-dc6b-4065-b80c-5020312fc92a" (UID: "293cacf8-dc6b-4065-b80c-5020312fc92a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.440115 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/293cacf8-dc6b-4065-b80c-5020312fc92a-kube-api-access-rfgd4" (OuterVolumeSpecName: "kube-api-access-rfgd4") pod "293cacf8-dc6b-4065-b80c-5020312fc92a" (UID: "293cacf8-dc6b-4065-b80c-5020312fc92a"). InnerVolumeSpecName "kube-api-access-rfgd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.449482 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-util" (OuterVolumeSpecName: "util") pod "293cacf8-dc6b-4065-b80c-5020312fc92a" (UID: "293cacf8-dc6b-4065-b80c-5020312fc92a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.535051 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfgd4\" (UniqueName: \"kubernetes.io/projected/293cacf8-dc6b-4065-b80c-5020312fc92a-kube-api-access-rfgd4\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.535104 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:54 crc kubenswrapper[4729]: I0127 14:24:54.535116 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/293cacf8-dc6b-4065-b80c-5020312fc92a-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:24:55 crc kubenswrapper[4729]: I0127 14:24:55.080065 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" event={"ID":"293cacf8-dc6b-4065-b80c-5020312fc92a","Type":"ContainerDied","Data":"ce86296441439835e70719b36960085eef8852504093255c593e88a32cd71ea1"} Jan 27 14:24:55 crc kubenswrapper[4729]: I0127 14:24:55.080111 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce86296441439835e70719b36960085eef8852504093255c593e88a32cd71ea1" Jan 27 14:24:55 crc kubenswrapper[4729]: I0127 14:24:55.080136 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.875222 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pg62x"] Jan 27 14:25:00 crc kubenswrapper[4729]: E0127 14:25:00.876183 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="extract" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.876203 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="extract" Jan 27 14:25:00 crc kubenswrapper[4729]: E0127 14:25:00.876231 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="util" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.876239 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="util" Jan 27 14:25:00 crc kubenswrapper[4729]: E0127 14:25:00.876250 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="pull" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.876281 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="pull" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.876436 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="293cacf8-dc6b-4065-b80c-5020312fc92a" containerName="extract" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.877153 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.880632 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dvg7m" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.881037 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.881945 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.883137 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pg62x"] Jan 27 14:25:00 crc kubenswrapper[4729]: I0127 14:25:00.934106 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5wc\" (UniqueName: \"kubernetes.io/projected/5bbadca7-3b66-4264-8217-5f246163b41e-kube-api-access-wx5wc\") pod \"nmstate-operator-646758c888-pg62x\" (UID: \"5bbadca7-3b66-4264-8217-5f246163b41e\") " pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" Jan 27 14:25:01 crc kubenswrapper[4729]: I0127 14:25:01.037204 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5wc\" (UniqueName: \"kubernetes.io/projected/5bbadca7-3b66-4264-8217-5f246163b41e-kube-api-access-wx5wc\") pod \"nmstate-operator-646758c888-pg62x\" (UID: \"5bbadca7-3b66-4264-8217-5f246163b41e\") " pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" Jan 27 14:25:01 crc kubenswrapper[4729]: I0127 14:25:01.060280 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5wc\" (UniqueName: \"kubernetes.io/projected/5bbadca7-3b66-4264-8217-5f246163b41e-kube-api-access-wx5wc\") pod \"nmstate-operator-646758c888-pg62x\" (UID: 
\"5bbadca7-3b66-4264-8217-5f246163b41e\") " pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" Jan 27 14:25:01 crc kubenswrapper[4729]: I0127 14:25:01.199915 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" Jan 27 14:25:01 crc kubenswrapper[4729]: I0127 14:25:01.696340 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-pg62x"] Jan 27 14:25:02 crc kubenswrapper[4729]: I0127 14:25:02.127722 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" event={"ID":"5bbadca7-3b66-4264-8217-5f246163b41e","Type":"ContainerStarted","Data":"5aea602b84f32b96e4a07431ea7a2200d57a6d2cbe3017adb71cfca1653da33c"} Jan 27 14:25:05 crc kubenswrapper[4729]: I0127 14:25:05.149943 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" event={"ID":"5bbadca7-3b66-4264-8217-5f246163b41e","Type":"ContainerStarted","Data":"953e74121a61e1be0d42722f51c4e9a67f3fbd7f1c785e59eadf8b6d53e10073"} Jan 27 14:25:05 crc kubenswrapper[4729]: I0127 14:25:05.180188 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-pg62x" podStartSLOduration=2.642803818 podStartE2EDuration="5.1801722s" podCreationTimestamp="2026-01-27 14:25:00 +0000 UTC" firstStartedPulling="2026-01-27 14:25:01.693536418 +0000 UTC m=+1188.277727422" lastFinishedPulling="2026-01-27 14:25:04.2309048 +0000 UTC m=+1190.815095804" observedRunningTime="2026-01-27 14:25:05.174714514 +0000 UTC m=+1191.758905528" watchObservedRunningTime="2026-01-27 14:25:05.1801722 +0000 UTC m=+1191.764363194" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.132929 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7cmw2"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 
14:25:06.134777 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.136865 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vwmhv" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.147568 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7cmw2"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.154954 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.156189 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.161003 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.180266 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.190780 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bbqst"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.191854 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.216976 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-ovs-socket\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.217093 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b15636d-99af-4c7d-80a8-b179de0709d7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.217132 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xld7m\" (UniqueName: \"kubernetes.io/projected/15d3bff1-3e83-4e70-8118-ca0163c18e48-kube-api-access-xld7m\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.217260 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-dbus-socket\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.217309 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvjx\" (UniqueName: \"kubernetes.io/projected/9ad69054-310c-4725-8991-d6bd0ace768d-kube-api-access-lqvjx\") pod 
\"nmstate-metrics-54757c584b-7cmw2\" (UID: \"9ad69054-310c-4725-8991-d6bd0ace768d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.217327 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7vjt\" (UniqueName: \"kubernetes.io/projected/7b15636d-99af-4c7d-80a8-b179de0709d7-kube-api-access-t7vjt\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.217362 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-nmstate-lock\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.284981 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.285972 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.301492 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.301703 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.301960 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-bjk67" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318507 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqvjx\" (UniqueName: \"kubernetes.io/projected/9ad69054-310c-4725-8991-d6bd0ace768d-kube-api-access-lqvjx\") pod \"nmstate-metrics-54757c584b-7cmw2\" (UID: \"9ad69054-310c-4725-8991-d6bd0ace768d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318565 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7vjt\" (UniqueName: \"kubernetes.io/projected/7b15636d-99af-4c7d-80a8-b179de0709d7-kube-api-access-t7vjt\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318600 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8wg\" (UniqueName: \"kubernetes.io/projected/08c43db0-20ec-4a56-bd40-718173782b7b-kube-api-access-sn8wg\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318768 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-nmstate-lock\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-ovs-socket\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318892 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-nmstate-lock\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318922 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b15636d-99af-4c7d-80a8-b179de0709d7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318956 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-ovs-socket\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.318959 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xld7m\" (UniqueName: 
\"kubernetes.io/projected/15d3bff1-3e83-4e70-8118-ca0163c18e48-kube-api-access-xld7m\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: E0127 14:25:06.319004 4729 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 27 14:25:06 crc kubenswrapper[4729]: E0127 14:25:06.319078 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b15636d-99af-4c7d-80a8-b179de0709d7-tls-key-pair podName:7b15636d-99af-4c7d-80a8-b179de0709d7 nodeName:}" failed. No retries permitted until 2026-01-27 14:25:06.819055901 +0000 UTC m=+1193.403246905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/7b15636d-99af-4c7d-80a8-b179de0709d7-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-t7h46" (UID: "7b15636d-99af-4c7d-80a8-b179de0709d7") : secret "openshift-nmstate-webhook" not found Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.319143 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-dbus-socket\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.319201 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/08c43db0-20ec-4a56-bd40-718173782b7b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.319240 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/08c43db0-20ec-4a56-bd40-718173782b7b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.319462 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/15d3bff1-3e83-4e70-8118-ca0163c18e48-dbus-socket\") pod \"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.343674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqvjx\" (UniqueName: \"kubernetes.io/projected/9ad69054-310c-4725-8991-d6bd0ace768d-kube-api-access-lqvjx\") pod \"nmstate-metrics-54757c584b-7cmw2\" (UID: \"9ad69054-310c-4725-8991-d6bd0ace768d\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.353131 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.370674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7vjt\" (UniqueName: \"kubernetes.io/projected/7b15636d-99af-4c7d-80a8-b179de0709d7-kube-api-access-t7vjt\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.389415 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xld7m\" (UniqueName: \"kubernetes.io/projected/15d3bff1-3e83-4e70-8118-ca0163c18e48-kube-api-access-xld7m\") pod 
\"nmstate-handler-bbqst\" (UID: \"15d3bff1-3e83-4e70-8118-ca0163c18e48\") " pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.420782 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/08c43db0-20ec-4a56-bd40-718173782b7b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.420839 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/08c43db0-20ec-4a56-bd40-718173782b7b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.420894 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8wg\" (UniqueName: \"kubernetes.io/projected/08c43db0-20ec-4a56-bd40-718173782b7b-kube-api-access-sn8wg\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: E0127 14:25:06.421176 4729 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 27 14:25:06 crc kubenswrapper[4729]: E0127 14:25:06.421551 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c43db0-20ec-4a56-bd40-718173782b7b-plugin-serving-cert podName:08c43db0-20ec-4a56-bd40-718173782b7b nodeName:}" failed. No retries permitted until 2026-01-27 14:25:06.921531871 +0000 UTC m=+1193.505722875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/08c43db0-20ec-4a56-bd40-718173782b7b-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-hvk8w" (UID: "08c43db0-20ec-4a56-bd40-718173782b7b") : secret "plugin-serving-cert" not found Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.422544 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/08c43db0-20ec-4a56-bd40-718173782b7b-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.448574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8wg\" (UniqueName: \"kubernetes.io/projected/08c43db0-20ec-4a56-bd40-718173782b7b-kube-api-access-sn8wg\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.464288 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.496963 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-779df67894-h7t9z"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.498161 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.516829 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.520370 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-779df67894-h7t9z"] Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521363 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-service-ca\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521407 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-oauth-config\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521475 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-oauth-serving-cert\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521492 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4drhr\" (UniqueName: \"kubernetes.io/projected/adf1e407-1d97-4ef8-be0a-690eb2763b1d-kube-api-access-4drhr\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521516 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-serving-cert\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521552 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-config\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.521596 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-trusted-ca-bundle\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.629707 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-oauth-config\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.630472 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-oauth-serving-cert\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 
14:25:06.630503 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4drhr\" (UniqueName: \"kubernetes.io/projected/adf1e407-1d97-4ef8-be0a-690eb2763b1d-kube-api-access-4drhr\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.630536 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-serving-cert\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.630596 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-config\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.630666 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-trusted-ca-bundle\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.630687 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-service-ca\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.631631 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-oauth-serving-cert\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.631931 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-config\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.632367 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-trusted-ca-bundle\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.632774 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-service-ca\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.636127 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-serving-cert\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.637034 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-oauth-config\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.666785 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4drhr\" (UniqueName: \"kubernetes.io/projected/adf1e407-1d97-4ef8-be0a-690eb2763b1d-kube-api-access-4drhr\") pod \"console-779df67894-h7t9z\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.833407 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b15636d-99af-4c7d-80a8-b179de0709d7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.837511 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b15636d-99af-4c7d-80a8-b179de0709d7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-t7h46\" (UID: \"7b15636d-99af-4c7d-80a8-b179de0709d7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.846529 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.934932 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/08c43db0-20ec-4a56-bd40-718173782b7b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.940087 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/08c43db0-20ec-4a56-bd40-718173782b7b-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-hvk8w\" (UID: \"08c43db0-20ec-4a56-bd40-718173782b7b\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:06 crc kubenswrapper[4729]: I0127 14:25:06.951584 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-7cmw2"] Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.074498 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.174654 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bbqst" event={"ID":"15d3bff1-3e83-4e70-8118-ca0163c18e48","Type":"ContainerStarted","Data":"5169f6fc1babe8e8a241baf0ba0a144d3ad530e2e4f26a4a1a4b0565037f76f9"} Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.176732 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" event={"ID":"9ad69054-310c-4725-8991-d6bd0ace768d","Type":"ContainerStarted","Data":"8ae68f89eb5c20350d215ff3bd319c669bcbb5c38f9ad76e7095c6521060045f"} Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.223029 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.291184 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-779df67894-h7t9z"] Jan 27 14:25:07 crc kubenswrapper[4729]: W0127 14:25:07.299555 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf1e407_1d97_4ef8_be0a_690eb2763b1d.slice/crio-0452510f7077000b7fd125c5a219b020d82f70d5f05719a465024a8d6b7a5b5c WatchSource:0}: Error finding container 0452510f7077000b7fd125c5a219b020d82f70d5f05719a465024a8d6b7a5b5c: Status 404 returned error can't find the container with id 0452510f7077000b7fd125c5a219b020d82f70d5f05719a465024a8d6b7a5b5c Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.475215 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46"] Jan 27 14:25:07 crc kubenswrapper[4729]: I0127 14:25:07.680923 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w"] Jan 
27 14:25:08 crc kubenswrapper[4729]: I0127 14:25:08.187274 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" event={"ID":"08c43db0-20ec-4a56-bd40-718173782b7b","Type":"ContainerStarted","Data":"c07aded0076e035918f9977182a528bf225a3db87429f0b49d5a71ba294c572e"} Jan 27 14:25:08 crc kubenswrapper[4729]: I0127 14:25:08.189371 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" event={"ID":"7b15636d-99af-4c7d-80a8-b179de0709d7","Type":"ContainerStarted","Data":"e5c62e6e9d07a6554ac43a277d969c833af76f89bcf44547af9cc1f015d4a777"} Jan 27 14:25:08 crc kubenswrapper[4729]: I0127 14:25:08.191045 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-779df67894-h7t9z" event={"ID":"adf1e407-1d97-4ef8-be0a-690eb2763b1d","Type":"ContainerStarted","Data":"7839d7cbe95928236a0f448e10b3de55844205d847b17d72425d950f3ead868e"} Jan 27 14:25:08 crc kubenswrapper[4729]: I0127 14:25:08.191071 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-779df67894-h7t9z" event={"ID":"adf1e407-1d97-4ef8-be0a-690eb2763b1d","Type":"ContainerStarted","Data":"0452510f7077000b7fd125c5a219b020d82f70d5f05719a465024a8d6b7a5b5c"} Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.214334 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bbqst" event={"ID":"15d3bff1-3e83-4e70-8118-ca0163c18e48","Type":"ContainerStarted","Data":"f6574880f67436fcfc6efeeec3f7833259c4e107ee1dfe789e496bc8cef58174"} Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.215015 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.231242 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" 
event={"ID":"9ad69054-310c-4725-8991-d6bd0ace768d","Type":"ContainerStarted","Data":"b1c683c085175199503b8caff11e3f7567a6a10aa0fb8ef05a921decfb8949c9"} Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.234292 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" event={"ID":"7b15636d-99af-4c7d-80a8-b179de0709d7","Type":"ContainerStarted","Data":"c2380d130573f9dab1b4a23074f30d5d635f3ce366949f0e94aa72abe3e5febf"} Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.234490 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.249949 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-779df67894-h7t9z" podStartSLOduration=4.249926872 podStartE2EDuration="4.249926872s" podCreationTimestamp="2026-01-27 14:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:25:08.210623931 +0000 UTC m=+1194.794814935" watchObservedRunningTime="2026-01-27 14:25:10.249926872 +0000 UTC m=+1196.834117896" Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.255484 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bbqst" podStartSLOduration=1.537626911 podStartE2EDuration="4.25545722s" podCreationTimestamp="2026-01-27 14:25:06 +0000 UTC" firstStartedPulling="2026-01-27 14:25:06.576964688 +0000 UTC m=+1193.161155692" lastFinishedPulling="2026-01-27 14:25:09.294794987 +0000 UTC m=+1195.878986001" observedRunningTime="2026-01-27 14:25:10.244409324 +0000 UTC m=+1196.828600358" watchObservedRunningTime="2026-01-27 14:25:10.25545722 +0000 UTC m=+1196.839648224" Jan 27 14:25:10 crc kubenswrapper[4729]: I0127 14:25:10.265368 4729 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" podStartSLOduration=2.462158237 podStartE2EDuration="4.265347934s" podCreationTimestamp="2026-01-27 14:25:06 +0000 UTC" firstStartedPulling="2026-01-27 14:25:07.494854707 +0000 UTC m=+1194.079045711" lastFinishedPulling="2026-01-27 14:25:09.298044404 +0000 UTC m=+1195.882235408" observedRunningTime="2026-01-27 14:25:10.263487965 +0000 UTC m=+1196.847678989" watchObservedRunningTime="2026-01-27 14:25:10.265347934 +0000 UTC m=+1196.849538958" Jan 27 14:25:11 crc kubenswrapper[4729]: I0127 14:25:11.245051 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" event={"ID":"08c43db0-20ec-4a56-bd40-718173782b7b","Type":"ContainerStarted","Data":"23b4b1c5a2d59ba2d1197be2743a322912d991ab78f15ab21d94bd5f7c435e6c"} Jan 27 14:25:11 crc kubenswrapper[4729]: I0127 14:25:11.259439 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-hvk8w" podStartSLOduration=2.337912535 podStartE2EDuration="5.259420731s" podCreationTimestamp="2026-01-27 14:25:06 +0000 UTC" firstStartedPulling="2026-01-27 14:25:07.693141141 +0000 UTC m=+1194.277332165" lastFinishedPulling="2026-01-27 14:25:10.614649357 +0000 UTC m=+1197.198840361" observedRunningTime="2026-01-27 14:25:11.258974139 +0000 UTC m=+1197.843165153" watchObservedRunningTime="2026-01-27 14:25:11.259420731 +0000 UTC m=+1197.843611735" Jan 27 14:25:13 crc kubenswrapper[4729]: I0127 14:25:13.264976 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" event={"ID":"9ad69054-310c-4725-8991-d6bd0ace768d","Type":"ContainerStarted","Data":"46c56fa5e10899b18e7978c763a02108ff86438894da0938224f4fdbf2c023aa"} Jan 27 14:25:13 crc kubenswrapper[4729]: I0127 14:25:13.286704 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-7cmw2" podStartSLOduration=1.549335846 podStartE2EDuration="7.286675621s" podCreationTimestamp="2026-01-27 14:25:06 +0000 UTC" firstStartedPulling="2026-01-27 14:25:06.965937442 +0000 UTC m=+1193.550128446" lastFinishedPulling="2026-01-27 14:25:12.703277217 +0000 UTC m=+1199.287468221" observedRunningTime="2026-01-27 14:25:13.282400556 +0000 UTC m=+1199.866591570" watchObservedRunningTime="2026-01-27 14:25:13.286675621 +0000 UTC m=+1199.870866635" Jan 27 14:25:16 crc kubenswrapper[4729]: I0127 14:25:16.542947 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bbqst" Jan 27 14:25:16 crc kubenswrapper[4729]: I0127 14:25:16.847251 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:16 crc kubenswrapper[4729]: I0127 14:25:16.847771 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:16 crc kubenswrapper[4729]: I0127 14:25:16.853838 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:17 crc kubenswrapper[4729]: I0127 14:25:17.300504 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:25:17 crc kubenswrapper[4729]: I0127 14:25:17.360583 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-d975867cb-plmkl"] Jan 27 14:25:27 crc kubenswrapper[4729]: I0127 14:25:27.082898 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-t7h46" Jan 27 14:25:42 crc kubenswrapper[4729]: I0127 14:25:42.416340 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-d975867cb-plmkl" 
podUID="d8505b6a-3041-4b35-a246-82135f37a1bc" containerName="console" containerID="cri-o://f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74" gracePeriod=15 Jan 27 14:25:42 crc kubenswrapper[4729]: I0127 14:25:42.922083 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-d975867cb-plmkl_d8505b6a-3041-4b35-a246-82135f37a1bc/console/0.log" Jan 27 14:25:42 crc kubenswrapper[4729]: I0127 14:25:42.922491 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017409 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-oauth-config\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017469 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-serving-cert\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017525 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wccd5\" (UniqueName: \"kubernetes.io/projected/d8505b6a-3041-4b35-a246-82135f37a1bc-kube-api-access-wccd5\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017596 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-console-config\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: 
\"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017654 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-service-ca\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017687 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-oauth-serving-cert\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.017775 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-trusted-ca-bundle\") pod \"d8505b6a-3041-4b35-a246-82135f37a1bc\" (UID: \"d8505b6a-3041-4b35-a246-82135f37a1bc\") " Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.021024 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-console-config" (OuterVolumeSpecName: "console-config") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.021306 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.021362 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.021383 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-service-ca" (OuterVolumeSpecName: "service-ca") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.023966 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8505b6a-3041-4b35-a246-82135f37a1bc-kube-api-access-wccd5" (OuterVolumeSpecName: "kube-api-access-wccd5") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "kube-api-access-wccd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.025044 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.025068 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d8505b6a-3041-4b35-a246-82135f37a1bc" (UID: "d8505b6a-3041-4b35-a246-82135f37a1bc"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.119918 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.119967 4729 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.119983 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.119996 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.120007 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d8505b6a-3041-4b35-a246-82135f37a1bc-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.120018 4729 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-wccd5\" (UniqueName: \"kubernetes.io/projected/d8505b6a-3041-4b35-a246-82135f37a1bc-kube-api-access-wccd5\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.120029 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d8505b6a-3041-4b35-a246-82135f37a1bc-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.502609 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-d975867cb-plmkl_d8505b6a-3041-4b35-a246-82135f37a1bc/console/0.log" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.502677 4729 generic.go:334] "Generic (PLEG): container finished" podID="d8505b6a-3041-4b35-a246-82135f37a1bc" containerID="f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74" exitCode=2 Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.502718 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d975867cb-plmkl" event={"ID":"d8505b6a-3041-4b35-a246-82135f37a1bc","Type":"ContainerDied","Data":"f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74"} Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.502754 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d975867cb-plmkl" event={"ID":"d8505b6a-3041-4b35-a246-82135f37a1bc","Type":"ContainerDied","Data":"4db78b122c5e16ef25411ea952b919976572d8a83e6d0dca3ff56efb9d20409a"} Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.502763 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-d975867cb-plmkl" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.502773 4729 scope.go:117] "RemoveContainer" containerID="f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.534905 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-d975867cb-plmkl"] Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.540005 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-d975867cb-plmkl"] Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.542825 4729 scope.go:117] "RemoveContainer" containerID="f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74" Jan 27 14:25:43 crc kubenswrapper[4729]: E0127 14:25:43.543568 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74\": container with ID starting with f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74 not found: ID does not exist" containerID="f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.543615 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74"} err="failed to get container status \"f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74\": rpc error: code = NotFound desc = could not find container \"f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74\": container with ID starting with f89d6bfef81f48c33304ce9571dc2e8c411447b26938418bdb67aca02fe9ba74 not found: ID does not exist" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.582755 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx"] Jan 27 14:25:43 crc kubenswrapper[4729]: E0127 14:25:43.583125 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8505b6a-3041-4b35-a246-82135f37a1bc" containerName="console" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.583149 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8505b6a-3041-4b35-a246-82135f37a1bc" containerName="console" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.583376 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8505b6a-3041-4b35-a246-82135f37a1bc" containerName="console" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.584707 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.587304 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.592431 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx"] Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.628857 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.628969 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h28j9\" (UniqueName: 
\"kubernetes.io/projected/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-kube-api-access-h28j9\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.629077 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.730146 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.730216 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h28j9\" (UniqueName: \"kubernetes.io/projected/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-kube-api-access-h28j9\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.730345 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.730751 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.730992 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.752850 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h28j9\" (UniqueName: \"kubernetes.io/projected/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-kube-api-access-h28j9\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:43 crc kubenswrapper[4729]: I0127 14:25:43.900941 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:44 crc kubenswrapper[4729]: I0127 14:25:44.061385 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8505b6a-3041-4b35-a246-82135f37a1bc" path="/var/lib/kubelet/pods/d8505b6a-3041-4b35-a246-82135f37a1bc/volumes" Jan 27 14:25:44 crc kubenswrapper[4729]: I0127 14:25:44.342754 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx"] Jan 27 14:25:44 crc kubenswrapper[4729]: I0127 14:25:44.513418 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" event={"ID":"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4","Type":"ContainerStarted","Data":"597c75cade0d7722aeac304dd4dec22ce4df8554643c73b1bbff75aa1c0be1a9"} Jan 27 14:25:45 crc kubenswrapper[4729]: I0127 14:25:45.522423 4729 generic.go:334] "Generic (PLEG): container finished" podID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerID="355ba0fda1cdec03f92b62657bedb19d2feddc7268fcdd426f8fbd7d1cfca4d0" exitCode=0 Jan 27 14:25:45 crc kubenswrapper[4729]: I0127 14:25:45.522480 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" event={"ID":"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4","Type":"ContainerDied","Data":"355ba0fda1cdec03f92b62657bedb19d2feddc7268fcdd426f8fbd7d1cfca4d0"} Jan 27 14:25:47 crc kubenswrapper[4729]: I0127 14:25:47.542001 4729 generic.go:334] "Generic (PLEG): container finished" podID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerID="4f177e0aa141fd4e5e71d22048b2046086a455a03651219fe7c56cb0d0df9e29" exitCode=0 Jan 27 14:25:47 crc kubenswrapper[4729]: I0127 14:25:47.542164 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" event={"ID":"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4","Type":"ContainerDied","Data":"4f177e0aa141fd4e5e71d22048b2046086a455a03651219fe7c56cb0d0df9e29"} Jan 27 14:25:48 crc kubenswrapper[4729]: I0127 14:25:48.552989 4729 generic.go:334] "Generic (PLEG): container finished" podID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerID="cbf2928caa64ace9bcdaa6fb1a6e81a1b6fc63c742a27ebf6b763841a52c1dac" exitCode=0 Jan 27 14:25:48 crc kubenswrapper[4729]: I0127 14:25:48.553064 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" event={"ID":"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4","Type":"ContainerDied","Data":"cbf2928caa64ace9bcdaa6fb1a6e81a1b6fc63c742a27ebf6b763841a52c1dac"} Jan 27 14:25:49 crc kubenswrapper[4729]: I0127 14:25:49.913663 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.034575 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-bundle\") pod \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.034664 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h28j9\" (UniqueName: \"kubernetes.io/projected/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-kube-api-access-h28j9\") pod \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.034840 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-util\") pod \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\" (UID: \"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4\") " Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.035977 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-bundle" (OuterVolumeSpecName: "bundle") pod "6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" (UID: "6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.041151 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-kube-api-access-h28j9" (OuterVolumeSpecName: "kube-api-access-h28j9") pod "6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" (UID: "6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4"). InnerVolumeSpecName "kube-api-access-h28j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.137758 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.137804 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h28j9\" (UniqueName: \"kubernetes.io/projected/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-kube-api-access-h28j9\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.145834 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-util" (OuterVolumeSpecName: "util") pod "6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" (UID: "6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.239755 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.570080 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" event={"ID":"6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4","Type":"ContainerDied","Data":"597c75cade0d7722aeac304dd4dec22ce4df8554643c73b1bbff75aa1c0be1a9"} Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.570126 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx" Jan 27 14:25:50 crc kubenswrapper[4729]: I0127 14:25:50.570136 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="597c75cade0d7722aeac304dd4dec22ce4df8554643c73b1bbff75aa1c0be1a9" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.590044 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn"] Jan 27 14:25:59 crc kubenswrapper[4729]: E0127 14:25:59.591813 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="util" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.591949 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="util" Jan 27 14:25:59 crc kubenswrapper[4729]: E0127 14:25:59.592018 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="extract" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.592077 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="extract" Jan 27 14:25:59 crc kubenswrapper[4729]: E0127 14:25:59.592130 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="pull" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.592366 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="pull" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.592552 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4" containerName="extract" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.593144 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.597162 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.597346 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.597600 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.600845 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nsshb" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.600858 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.627690 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn"] Jan 27 14:25:59 crc kubenswrapper[4729]: 
I0127 14:25:59.697312 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlxh\" (UniqueName: \"kubernetes.io/projected/816e93a1-24ab-4c0b-acd4-439e95ae655d-kube-api-access-qhlxh\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.697394 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/816e93a1-24ab-4c0b-acd4-439e95ae655d-apiservice-cert\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.697424 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/816e93a1-24ab-4c0b-acd4-439e95ae655d-webhook-cert\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.799031 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/816e93a1-24ab-4c0b-acd4-439e95ae655d-webhook-cert\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.799136 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlxh\" (UniqueName: 
\"kubernetes.io/projected/816e93a1-24ab-4c0b-acd4-439e95ae655d-kube-api-access-qhlxh\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.799194 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/816e93a1-24ab-4c0b-acd4-439e95ae655d-apiservice-cert\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.806392 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/816e93a1-24ab-4c0b-acd4-439e95ae655d-apiservice-cert\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.806684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/816e93a1-24ab-4c0b-acd4-439e95ae655d-webhook-cert\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.825558 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlxh\" (UniqueName: \"kubernetes.io/projected/816e93a1-24ab-4c0b-acd4-439e95ae655d-kube-api-access-qhlxh\") pod \"metallb-operator-controller-manager-58cc84db45-5jsdn\" (UID: \"816e93a1-24ab-4c0b-acd4-439e95ae655d\") " 
pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.912466 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.938852 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46645688-26b9b"] Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.944110 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.946443 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.946791 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.946793 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9w5lz" Jan 27 14:25:59 crc kubenswrapper[4729]: I0127 14:25:59.967960 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46645688-26b9b"] Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.004375 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9k22\" (UniqueName: \"kubernetes.io/projected/07916c16-27c4-4035-855c-f5ca61af09df-kube-api-access-m9k22\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.004683 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07916c16-27c4-4035-855c-f5ca61af09df-webhook-cert\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.004787 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07916c16-27c4-4035-855c-f5ca61af09df-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.106917 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07916c16-27c4-4035-855c-f5ca61af09df-webhook-cert\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.107406 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07916c16-27c4-4035-855c-f5ca61af09df-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.107585 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9k22\" (UniqueName: \"kubernetes.io/projected/07916c16-27c4-4035-855c-f5ca61af09df-kube-api-access-m9k22\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: 
\"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.125005 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07916c16-27c4-4035-855c-f5ca61af09df-webhook-cert\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.132383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07916c16-27c4-4035-855c-f5ca61af09df-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.133655 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9k22\" (UniqueName: \"kubernetes.io/projected/07916c16-27c4-4035-855c-f5ca61af09df-kube-api-access-m9k22\") pod \"metallb-operator-webhook-server-7b46645688-26b9b\" (UID: \"07916c16-27c4-4035-855c-f5ca61af09df\") " pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.324471 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.474309 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn"] Jan 27 14:26:00 crc kubenswrapper[4729]: W0127 14:26:00.481370 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod816e93a1_24ab_4c0b_acd4_439e95ae655d.slice/crio-e27d6171cc31b7c165770831e520431deda10610837facb31b630a7ee354c973 WatchSource:0}: Error finding container e27d6171cc31b7c165770831e520431deda10610837facb31b630a7ee354c973: Status 404 returned error can't find the container with id e27d6171cc31b7c165770831e520431deda10610837facb31b630a7ee354c973 Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.640373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" event={"ID":"816e93a1-24ab-4c0b-acd4-439e95ae655d","Type":"ContainerStarted","Data":"e27d6171cc31b7c165770831e520431deda10610837facb31b630a7ee354c973"} Jan 27 14:26:00 crc kubenswrapper[4729]: I0127 14:26:00.780110 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46645688-26b9b"] Jan 27 14:26:00 crc kubenswrapper[4729]: W0127 14:26:00.782480 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07916c16_27c4_4035_855c_f5ca61af09df.slice/crio-b08df2cb34484f37102d085f13fea69187809d8f849c58849328402e6cc124a3 WatchSource:0}: Error finding container b08df2cb34484f37102d085f13fea69187809d8f849c58849328402e6cc124a3: Status 404 returned error can't find the container with id b08df2cb34484f37102d085f13fea69187809d8f849c58849328402e6cc124a3 Jan 27 14:26:01 crc kubenswrapper[4729]: I0127 14:26:01.647747 4729 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" event={"ID":"07916c16-27c4-4035-855c-f5ca61af09df","Type":"ContainerStarted","Data":"b08df2cb34484f37102d085f13fea69187809d8f849c58849328402e6cc124a3"} Jan 27 14:26:08 crc kubenswrapper[4729]: I0127 14:26:08.711159 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" event={"ID":"816e93a1-24ab-4c0b-acd4-439e95ae655d","Type":"ContainerStarted","Data":"ee9f9a2fbe11af95dde8b535f1f88928af33f199f48d83c7cfaf751cdf670d46"} Jan 27 14:26:08 crc kubenswrapper[4729]: I0127 14:26:08.711738 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:26:08 crc kubenswrapper[4729]: I0127 14:26:08.712735 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" event={"ID":"07916c16-27c4-4035-855c-f5ca61af09df","Type":"ContainerStarted","Data":"00f2432346672dfb2959b9eb2f91ad72d28f23be9e3515e11c2acd2bb6aa8144"} Jan 27 14:26:08 crc kubenswrapper[4729]: I0127 14:26:08.712914 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:08 crc kubenswrapper[4729]: I0127 14:26:08.730028 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" podStartSLOduration=2.302183434 podStartE2EDuration="9.730010385s" podCreationTimestamp="2026-01-27 14:25:59 +0000 UTC" firstStartedPulling="2026-01-27 14:26:00.483862502 +0000 UTC m=+1247.068053506" lastFinishedPulling="2026-01-27 14:26:07.911689453 +0000 UTC m=+1254.495880457" observedRunningTime="2026-01-27 14:26:08.726826738 +0000 UTC m=+1255.311017742" watchObservedRunningTime="2026-01-27 14:26:08.730010385 +0000 UTC m=+1255.314201389" Jan 27 
14:26:08 crc kubenswrapper[4729]: I0127 14:26:08.749300 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" podStartSLOduration=2.593079613 podStartE2EDuration="9.749281944s" podCreationTimestamp="2026-01-27 14:25:59 +0000 UTC" firstStartedPulling="2026-01-27 14:26:00.785226317 +0000 UTC m=+1247.369417321" lastFinishedPulling="2026-01-27 14:26:07.941428608 +0000 UTC m=+1254.525619652" observedRunningTime="2026-01-27 14:26:08.743800734 +0000 UTC m=+1255.327991728" watchObservedRunningTime="2026-01-27 14:26:08.749281944 +0000 UTC m=+1255.333472948" Jan 27 14:26:20 crc kubenswrapper[4729]: I0127 14:26:20.330106 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7b46645688-26b9b" Jan 27 14:26:22 crc kubenswrapper[4729]: I0127 14:26:22.654850 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:26:22 crc kubenswrapper[4729]: I0127 14:26:22.656069 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:26:39 crc kubenswrapper[4729]: I0127 14:26:39.915274 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58cc84db45-5jsdn" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.772191 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ml79v"] Jan 27 14:26:40 crc 
kubenswrapper[4729]: I0127 14:26:40.775685 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.779769 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.779860 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.779779 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rq9cd" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.787825 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd"] Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.789489 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.797815 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.798609 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd"] Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.886262 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-k2h86"] Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.887894 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k2h86" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.893539 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.893822 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.894092 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-jdm7z" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.894234 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898549 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0015219d-ee39-40ea-896a-944b4b45e46b-frr-startup\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898616 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmgn\" (UniqueName: \"kubernetes.io/projected/cdef1951-7494-4265-9e1c-9098dab9c112-kube-api-access-mzmgn\") pod \"frr-k8s-webhook-server-7df86c4f6c-f7nvd\" (UID: \"cdef1951-7494-4265-9e1c-9098dab9c112\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898655 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0015219d-ee39-40ea-896a-944b4b45e46b-metrics-certs\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 
14:26:40.898701 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-metrics\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898739 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-reloader\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898790 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdef1951-7494-4265-9e1c-9098dab9c112-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-f7nvd\" (UID: \"cdef1951-7494-4265-9e1c-9098dab9c112\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898814 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldsvt\" (UniqueName: \"kubernetes.io/projected/0015219d-ee39-40ea-896a-944b4b45e46b-kube-api-access-ldsvt\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898840 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-frr-conf\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.898943 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-frr-sockets\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.914041 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-wdllv"] Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.915600 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.921266 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 14:26:40 crc kubenswrapper[4729]: I0127 14:26:40.948650 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wdllv"] Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.001483 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metallb-excludel2\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.001549 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-frr-conf\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002036 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-frr-conf\") pod \"frr-k8s-ml79v\" (UID: 
\"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002340 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-frr-sockets\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002502 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tb77\" (UniqueName: \"kubernetes.io/projected/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-kube-api-access-7tb77\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002552 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0015219d-ee39-40ea-896a-944b4b45e46b-frr-startup\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002579 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmgn\" (UniqueName: \"kubernetes.io/projected/cdef1951-7494-4265-9e1c-9098dab9c112-kube-api-access-mzmgn\") pod \"frr-k8s-webhook-server-7df86c4f6c-f7nvd\" (UID: \"cdef1951-7494-4265-9e1c-9098dab9c112\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002638 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0015219d-ee39-40ea-896a-944b4b45e46b-metrics-certs\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 
crc kubenswrapper[4729]: I0127 14:26:41.002715 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-metrics\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002771 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.002804 4729 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002860 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-reloader\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.002904 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0015219d-ee39-40ea-896a-944b4b45e46b-metrics-certs podName:0015219d-ee39-40ea-896a-944b4b45e46b nodeName:}" failed. No retries permitted until 2026-01-27 14:26:41.502865422 +0000 UTC m=+1288.087056426 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0015219d-ee39-40ea-896a-944b4b45e46b-metrics-certs") pod "frr-k8s-ml79v" (UID: "0015219d-ee39-40ea-896a-944b4b45e46b") : secret "frr-k8s-certs-secret" not found Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.002933 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metrics-certs\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.003065 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdef1951-7494-4265-9e1c-9098dab9c112-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-f7nvd\" (UID: \"cdef1951-7494-4265-9e1c-9098dab9c112\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.003063 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-metrics\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.003102 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldsvt\" (UniqueName: \"kubernetes.io/projected/0015219d-ee39-40ea-896a-944b4b45e46b-kube-api-access-ldsvt\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.003268 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-frr-sockets\") pod 
\"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.003529 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0015219d-ee39-40ea-896a-944b4b45e46b-frr-startup\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.003561 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0015219d-ee39-40ea-896a-944b4b45e46b-reloader\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.019669 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cdef1951-7494-4265-9e1c-9098dab9c112-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-f7nvd\" (UID: \"cdef1951-7494-4265-9e1c-9098dab9c112\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.025413 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmgn\" (UniqueName: \"kubernetes.io/projected/cdef1951-7494-4265-9e1c-9098dab9c112-kube-api-access-mzmgn\") pod \"frr-k8s-webhook-server-7df86c4f6c-f7nvd\" (UID: \"cdef1951-7494-4265-9e1c-9098dab9c112\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.034572 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldsvt\" (UniqueName: \"kubernetes.io/projected/0015219d-ee39-40ea-896a-944b4b45e46b-kube-api-access-ldsvt\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc 
kubenswrapper[4729]: I0127 14:26:41.105177 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.105228 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xq4\" (UniqueName: \"kubernetes.io/projected/c39bec12-16a0-40f8-996b-ca212fedccc2-kube-api-access-v8xq4\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.105255 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c39bec12-16a0-40f8-996b-ca212fedccc2-metrics-certs\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.105284 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metrics-certs\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.105351 4729 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.105397 4729 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.105405 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metallb-excludel2\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.105427 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist podName:2069295e-9cb7-458a-b4f6-4f569b6e6a8e nodeName:}" failed. No retries permitted until 2026-01-27 14:26:41.605410874 +0000 UTC m=+1288.189601878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist") pod "speaker-k2h86" (UID: "2069295e-9cb7-458a-b4f6-4f569b6e6a8e") : secret "metallb-memberlist" not found Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.105454 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metrics-certs podName:2069295e-9cb7-458a-b4f6-4f569b6e6a8e nodeName:}" failed. No retries permitted until 2026-01-27 14:26:41.605435655 +0000 UTC m=+1288.189626699 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metrics-certs") pod "speaker-k2h86" (UID: "2069295e-9cb7-458a-b4f6-4f569b6e6a8e") : secret "speaker-certs-secret" not found Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.105516 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c39bec12-16a0-40f8-996b-ca212fedccc2-cert\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.105555 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tb77\" (UniqueName: \"kubernetes.io/projected/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-kube-api-access-7tb77\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.106204 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metallb-excludel2\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.117322 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.124160 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tb77\" (UniqueName: \"kubernetes.io/projected/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-kube-api-access-7tb77\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.208109 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8xq4\" (UniqueName: \"kubernetes.io/projected/c39bec12-16a0-40f8-996b-ca212fedccc2-kube-api-access-v8xq4\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.208533 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c39bec12-16a0-40f8-996b-ca212fedccc2-metrics-certs\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.208660 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c39bec12-16a0-40f8-996b-ca212fedccc2-cert\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.210842 4729 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.218643 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c39bec12-16a0-40f8-996b-ca212fedccc2-metrics-certs\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.225350 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c39bec12-16a0-40f8-996b-ca212fedccc2-cert\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.254241 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8xq4\" (UniqueName: \"kubernetes.io/projected/c39bec12-16a0-40f8-996b-ca212fedccc2-kube-api-access-v8xq4\") pod \"controller-6968d8fdc4-wdllv\" (UID: \"c39bec12-16a0-40f8-996b-ca212fedccc2\") " pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.517793 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0015219d-ee39-40ea-896a-944b4b45e46b-metrics-certs\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.526691 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0015219d-ee39-40ea-896a-944b4b45e46b-metrics-certs\") pod \"frr-k8s-ml79v\" (UID: \"0015219d-ee39-40ea-896a-944b4b45e46b\") " pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.536701 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.619738 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.619801 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metrics-certs\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.619992 4729 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 14:26:41 crc kubenswrapper[4729]: E0127 14:26:41.620067 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist podName:2069295e-9cb7-458a-b4f6-4f569b6e6a8e nodeName:}" failed. No retries permitted until 2026-01-27 14:26:42.620050309 +0000 UTC m=+1289.204241313 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist") pod "speaker-k2h86" (UID: "2069295e-9cb7-458a-b4f6-4f569b6e6a8e") : secret "metallb-memberlist" not found Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.623528 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-metrics-certs\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.668316 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd"] Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.706666 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.963443 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"f41a0bb4f142ca9074d1be7770dc57efdeae5feca781bd5208cc59637673018d"} Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.966171 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" event={"ID":"cdef1951-7494-4265-9e1c-9098dab9c112","Type":"ContainerStarted","Data":"10d3efbe1da3383b04286d29708079c7715ed9f0760454b2e237afae85408c7e"} Jan 27 14:26:41 crc kubenswrapper[4729]: I0127 14:26:41.971079 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wdllv"] Jan 27 14:26:41 crc kubenswrapper[4729]: W0127 14:26:41.977837 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc39bec12_16a0_40f8_996b_ca212fedccc2.slice/crio-e14725a5622e9289809f6879ea72303d61e4881b4d73225feeb03429304eaeb7 WatchSource:0}: Error finding container e14725a5622e9289809f6879ea72303d61e4881b4d73225feeb03429304eaeb7: Status 404 returned error can't find the container with id e14725a5622e9289809f6879ea72303d61e4881b4d73225feeb03429304eaeb7 Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.641296 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.647624 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2069295e-9cb7-458a-b4f6-4f569b6e6a8e-memberlist\") pod \"speaker-k2h86\" (UID: \"2069295e-9cb7-458a-b4f6-4f569b6e6a8e\") " pod="metallb-system/speaker-k2h86" Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.709203 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-k2h86" Jan 27 14:26:42 crc kubenswrapper[4729]: W0127 14:26:42.730112 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2069295e_9cb7_458a_b4f6_4f569b6e6a8e.slice/crio-604bf7f36c1ef8d4e70975dc3bff782926872465b216f8921ede16a9175698d8 WatchSource:0}: Error finding container 604bf7f36c1ef8d4e70975dc3bff782926872465b216f8921ede16a9175698d8: Status 404 returned error can't find the container with id 604bf7f36c1ef8d4e70975dc3bff782926872465b216f8921ede16a9175698d8 Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.978478 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wdllv" event={"ID":"c39bec12-16a0-40f8-996b-ca212fedccc2","Type":"ContainerStarted","Data":"c2a55e55ebf037d98bb2f97ded2db640cc8bc8093659573fdbceee1e5ad47149"} Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.978550 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.978568 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wdllv" event={"ID":"c39bec12-16a0-40f8-996b-ca212fedccc2","Type":"ContainerStarted","Data":"cd47eda45c51acbafb49437439542e968877bfae780ab161d341c77382ea38bf"} Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.978582 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wdllv" event={"ID":"c39bec12-16a0-40f8-996b-ca212fedccc2","Type":"ContainerStarted","Data":"e14725a5622e9289809f6879ea72303d61e4881b4d73225feeb03429304eaeb7"} Jan 27 14:26:42 crc kubenswrapper[4729]: I0127 14:26:42.980708 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k2h86" 
event={"ID":"2069295e-9cb7-458a-b4f6-4f569b6e6a8e","Type":"ContainerStarted","Data":"604bf7f36c1ef8d4e70975dc3bff782926872465b216f8921ede16a9175698d8"} Jan 27 14:26:43 crc kubenswrapper[4729]: I0127 14:26:43.000685 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-wdllv" podStartSLOduration=3.000668583 podStartE2EDuration="3.000668583s" podCreationTimestamp="2026-01-27 14:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:26:42.993981109 +0000 UTC m=+1289.578172113" watchObservedRunningTime="2026-01-27 14:26:43.000668583 +0000 UTC m=+1289.584859587" Jan 27 14:26:44 crc kubenswrapper[4729]: I0127 14:26:44.016430 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k2h86" event={"ID":"2069295e-9cb7-458a-b4f6-4f569b6e6a8e","Type":"ContainerStarted","Data":"2a3943de9970bf80c385b579e8cbcbbd4c7134282f894771c8331141b2b2d541"} Jan 27 14:26:44 crc kubenswrapper[4729]: I0127 14:26:44.016934 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-k2h86" Jan 27 14:26:44 crc kubenswrapper[4729]: I0127 14:26:44.016962 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-k2h86" event={"ID":"2069295e-9cb7-458a-b4f6-4f569b6e6a8e","Type":"ContainerStarted","Data":"b248740a0305e108f2731831fa2d2debeb6e4d109e3638d3174e01aa0ec4ed5e"} Jan 27 14:26:44 crc kubenswrapper[4729]: I0127 14:26:44.041763 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-k2h86" podStartSLOduration=4.041671142 podStartE2EDuration="4.041671142s" podCreationTimestamp="2026-01-27 14:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:26:44.038218948 +0000 UTC m=+1290.622409982" 
watchObservedRunningTime="2026-01-27 14:26:44.041671142 +0000 UTC m=+1290.625862166" Jan 27 14:26:51 crc kubenswrapper[4729]: I0127 14:26:51.084901 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" event={"ID":"cdef1951-7494-4265-9e1c-9098dab9c112","Type":"ContainerStarted","Data":"d649636f34a841590ed528bf9eedf3831873d1a71582272b50c4156865d0eaf2"} Jan 27 14:26:51 crc kubenswrapper[4729]: I0127 14:26:51.085980 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:26:51 crc kubenswrapper[4729]: I0127 14:26:51.090763 4729 generic.go:334] "Generic (PLEG): container finished" podID="0015219d-ee39-40ea-896a-944b4b45e46b" containerID="e255c2495fb29b133d414cd32df6408a32f4ad8c54d1c6d4313b17cc2eeb9819" exitCode=0 Jan 27 14:26:51 crc kubenswrapper[4729]: I0127 14:26:51.090812 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerDied","Data":"e255c2495fb29b133d414cd32df6408a32f4ad8c54d1c6d4313b17cc2eeb9819"} Jan 27 14:26:51 crc kubenswrapper[4729]: I0127 14:26:51.110373 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" podStartSLOduration=2.386108188 podStartE2EDuration="11.110346524s" podCreationTimestamp="2026-01-27 14:26:40 +0000 UTC" firstStartedPulling="2026-01-27 14:26:41.675093068 +0000 UTC m=+1288.259284072" lastFinishedPulling="2026-01-27 14:26:50.399331404 +0000 UTC m=+1296.983522408" observedRunningTime="2026-01-27 14:26:51.108664298 +0000 UTC m=+1297.692855302" watchObservedRunningTime="2026-01-27 14:26:51.110346524 +0000 UTC m=+1297.694537568" Jan 27 14:26:52 crc kubenswrapper[4729]: I0127 14:26:52.100660 4729 generic.go:334] "Generic (PLEG): container finished" podID="0015219d-ee39-40ea-896a-944b4b45e46b" 
containerID="8e609654cfdbbc72a92f841051d69a0daf48fb3bc0c1303d79b0521c49f717db" exitCode=0 Jan 27 14:26:52 crc kubenswrapper[4729]: I0127 14:26:52.100729 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerDied","Data":"8e609654cfdbbc72a92f841051d69a0daf48fb3bc0c1303d79b0521c49f717db"} Jan 27 14:26:52 crc kubenswrapper[4729]: I0127 14:26:52.655239 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:26:52 crc kubenswrapper[4729]: I0127 14:26:52.655295 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:26:52 crc kubenswrapper[4729]: I0127 14:26:52.713317 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-k2h86" Jan 27 14:26:53 crc kubenswrapper[4729]: I0127 14:26:53.111571 4729 generic.go:334] "Generic (PLEG): container finished" podID="0015219d-ee39-40ea-896a-944b4b45e46b" containerID="09c36287350d23c5f7f8bc32404af8544e1fdf0d97c064212bc87ec3859fd5e3" exitCode=0 Jan 27 14:26:53 crc kubenswrapper[4729]: I0127 14:26:53.111768 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerDied","Data":"09c36287350d23c5f7f8bc32404af8544e1fdf0d97c064212bc87ec3859fd5e3"} Jan 27 14:26:54 crc kubenswrapper[4729]: I0127 14:26:54.126806 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"f5da520ac4480ddc4c937e24574f27fd580d9dac08cf321d15ec7fcf117d4437"} Jan 27 14:26:54 crc kubenswrapper[4729]: I0127 14:26:54.127166 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"9644d7cc66370ad14e87f6f6e08056c5888331ab11eb72c0b9054c73178f76b3"} Jan 27 14:26:54 crc kubenswrapper[4729]: I0127 14:26:54.127184 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"75972eba205e3dc17a39d2c5f87d918826b421ddb9049029a6e2100ff0ba416a"} Jan 27 14:26:54 crc kubenswrapper[4729]: I0127 14:26:54.127194 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"f89c161495f94a20854babcc3d28b97723a47bf69832d910c59caba72ca42844"} Jan 27 14:26:54 crc kubenswrapper[4729]: I0127 14:26:54.127203 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"1076d3ff7d22ae676c5181888288a49be41510ec277acfde531c72e12e11abd3"} Jan 27 14:26:55 crc kubenswrapper[4729]: I0127 14:26:55.139267 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ml79v" event={"ID":"0015219d-ee39-40ea-896a-944b4b45e46b","Type":"ContainerStarted","Data":"19d799922884bef523ba15f02d8ea82a19b7e90253f39659a153011fe2ad32aa"} Jan 27 14:26:55 crc kubenswrapper[4729]: I0127 14:26:55.139453 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:55 crc kubenswrapper[4729]: I0127 14:26:55.172096 4729 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="metallb-system/frr-k8s-ml79v" podStartSLOduration=6.689325897 podStartE2EDuration="15.172074229s" podCreationTimestamp="2026-01-27 14:26:40 +0000 UTC" firstStartedPulling="2026-01-27 14:26:41.893300503 +0000 UTC m=+1288.477491507" lastFinishedPulling="2026-01-27 14:26:50.376048835 +0000 UTC m=+1296.960239839" observedRunningTime="2026-01-27 14:26:55.166124796 +0000 UTC m=+1301.750315810" watchObservedRunningTime="2026-01-27 14:26:55.172074229 +0000 UTC m=+1301.756265233" Jan 27 14:26:56 crc kubenswrapper[4729]: I0127 14:26:56.707505 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:56 crc kubenswrapper[4729]: I0127 14:26:56.750260 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ml79v" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.595549 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7hw47"] Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.597284 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.599814 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.600605 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lpknx" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.604013 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.611963 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7hw47"] Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.699444 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nht\" (UniqueName: \"kubernetes.io/projected/19dcb1bc-8570-40b5-9493-349fc2ea4cc0-kube-api-access-m5nht\") pod \"openstack-operator-index-7hw47\" (UID: \"19dcb1bc-8570-40b5-9493-349fc2ea4cc0\") " pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.800912 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nht\" (UniqueName: \"kubernetes.io/projected/19dcb1bc-8570-40b5-9493-349fc2ea4cc0-kube-api-access-m5nht\") pod \"openstack-operator-index-7hw47\" (UID: \"19dcb1bc-8570-40b5-9493-349fc2ea4cc0\") " pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.835090 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nht\" (UniqueName: \"kubernetes.io/projected/19dcb1bc-8570-40b5-9493-349fc2ea4cc0-kube-api-access-m5nht\") pod \"openstack-operator-index-7hw47\" (UID: 
\"19dcb1bc-8570-40b5-9493-349fc2ea4cc0\") " pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:26:57 crc kubenswrapper[4729]: I0127 14:26:57.917811 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:26:58 crc kubenswrapper[4729]: I0127 14:26:58.412999 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7hw47"] Jan 27 14:26:58 crc kubenswrapper[4729]: W0127 14:26:58.419737 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19dcb1bc_8570_40b5_9493_349fc2ea4cc0.slice/crio-cc3050c64f7a09a8359f922613b58323272e259daa698e60cde426fa93531ba5 WatchSource:0}: Error finding container cc3050c64f7a09a8359f922613b58323272e259daa698e60cde426fa93531ba5: Status 404 returned error can't find the container with id cc3050c64f7a09a8359f922613b58323272e259daa698e60cde426fa93531ba5 Jan 27 14:26:59 crc kubenswrapper[4729]: I0127 14:26:59.172453 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7hw47" event={"ID":"19dcb1bc-8570-40b5-9493-349fc2ea4cc0","Type":"ContainerStarted","Data":"cc3050c64f7a09a8359f922613b58323272e259daa698e60cde426fa93531ba5"} Jan 27 14:27:01 crc kubenswrapper[4729]: I0127 14:27:01.123208 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-f7nvd" Jan 27 14:27:01 crc kubenswrapper[4729]: I0127 14:27:01.541020 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-wdllv" Jan 27 14:27:02 crc kubenswrapper[4729]: I0127 14:27:02.198939 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7hw47" 
event={"ID":"19dcb1bc-8570-40b5-9493-349fc2ea4cc0","Type":"ContainerStarted","Data":"122470bf9cb64f278858bd95e182c275e2fc3c09aebbf43d1997e2bd944fffcd"} Jan 27 14:27:02 crc kubenswrapper[4729]: I0127 14:27:02.217935 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7hw47" podStartSLOduration=2.365775551 podStartE2EDuration="5.217914343s" podCreationTimestamp="2026-01-27 14:26:57 +0000 UTC" firstStartedPulling="2026-01-27 14:26:58.421956077 +0000 UTC m=+1305.006147081" lastFinishedPulling="2026-01-27 14:27:01.274094869 +0000 UTC m=+1307.858285873" observedRunningTime="2026-01-27 14:27:02.215715903 +0000 UTC m=+1308.799906917" watchObservedRunningTime="2026-01-27 14:27:02.217914343 +0000 UTC m=+1308.802105347" Jan 27 14:27:07 crc kubenswrapper[4729]: I0127 14:27:07.919528 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:27:07 crc kubenswrapper[4729]: I0127 14:27:07.920098 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:27:07 crc kubenswrapper[4729]: I0127 14:27:07.954167 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:27:08 crc kubenswrapper[4729]: I0127 14:27:08.276116 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7hw47" Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.847637 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk"] Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.849742 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.852987 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pbjrw" Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.868082 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk"] Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.920925 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxkrp\" (UniqueName: \"kubernetes.io/projected/384fac64-6243-404c-a413-49b548d4e510-kube-api-access-lxkrp\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.921156 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-bundle\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:09 crc kubenswrapper[4729]: I0127 14:27:09.921381 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-util\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 
14:27:10.023471 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxkrp\" (UniqueName: \"kubernetes.io/projected/384fac64-6243-404c-a413-49b548d4e510-kube-api-access-lxkrp\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.023533 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-bundle\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.023597 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-util\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.024192 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-util\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.024275 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-bundle\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.050112 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxkrp\" (UniqueName: \"kubernetes.io/projected/384fac64-6243-404c-a413-49b548d4e510-kube-api-access-lxkrp\") pod \"1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.195945 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:10 crc kubenswrapper[4729]: I0127 14:27:10.686246 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk"] Jan 27 14:27:10 crc kubenswrapper[4729]: W0127 14:27:10.690533 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod384fac64_6243_404c_a413_49b548d4e510.slice/crio-2a76f0172b42b1fc570b74ae526a25df5ce9bcea709630e4df67c439b63b03af WatchSource:0}: Error finding container 2a76f0172b42b1fc570b74ae526a25df5ce9bcea709630e4df67c439b63b03af: Status 404 returned error can't find the container with id 2a76f0172b42b1fc570b74ae526a25df5ce9bcea709630e4df67c439b63b03af Jan 27 14:27:11 crc kubenswrapper[4729]: I0127 14:27:11.276458 4729 generic.go:334] "Generic (PLEG): container finished" podID="384fac64-6243-404c-a413-49b548d4e510" containerID="a9286ddee4feedab92cc88c18e7ddd0b17510ac93573d41e500895667e66b1bb" exitCode=0 Jan 27 
14:27:11 crc kubenswrapper[4729]: I0127 14:27:11.276510 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" event={"ID":"384fac64-6243-404c-a413-49b548d4e510","Type":"ContainerDied","Data":"a9286ddee4feedab92cc88c18e7ddd0b17510ac93573d41e500895667e66b1bb"} Jan 27 14:27:11 crc kubenswrapper[4729]: I0127 14:27:11.276542 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" event={"ID":"384fac64-6243-404c-a413-49b548d4e510","Type":"ContainerStarted","Data":"2a76f0172b42b1fc570b74ae526a25df5ce9bcea709630e4df67c439b63b03af"} Jan 27 14:27:11 crc kubenswrapper[4729]: I0127 14:27:11.712682 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ml79v" Jan 27 14:27:16 crc kubenswrapper[4729]: I0127 14:27:16.322538 4729 generic.go:334] "Generic (PLEG): container finished" podID="384fac64-6243-404c-a413-49b548d4e510" containerID="efda826d84604c6d0265ab0c4891e3462dc930f8ff31d9bacb377d0a50f16860" exitCode=0 Jan 27 14:27:16 crc kubenswrapper[4729]: I0127 14:27:16.322588 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" event={"ID":"384fac64-6243-404c-a413-49b548d4e510","Type":"ContainerDied","Data":"efda826d84604c6d0265ab0c4891e3462dc930f8ff31d9bacb377d0a50f16860"} Jan 27 14:27:17 crc kubenswrapper[4729]: I0127 14:27:17.333528 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" event={"ID":"384fac64-6243-404c-a413-49b548d4e510","Type":"ContainerStarted","Data":"379aa33d428b9f1318b974ac1fe02c9158903b3997bf45732bceb5ab8a810c9f"} Jan 27 14:27:18 crc kubenswrapper[4729]: I0127 14:27:18.344360 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="384fac64-6243-404c-a413-49b548d4e510" containerID="379aa33d428b9f1318b974ac1fe02c9158903b3997bf45732bceb5ab8a810c9f" exitCode=0 Jan 27 14:27:18 crc kubenswrapper[4729]: I0127 14:27:18.344462 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" event={"ID":"384fac64-6243-404c-a413-49b548d4e510","Type":"ContainerDied","Data":"379aa33d428b9f1318b974ac1fe02c9158903b3997bf45732bceb5ab8a810c9f"} Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.739830 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.906075 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxkrp\" (UniqueName: \"kubernetes.io/projected/384fac64-6243-404c-a413-49b548d4e510-kube-api-access-lxkrp\") pod \"384fac64-6243-404c-a413-49b548d4e510\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.906277 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-bundle\") pod \"384fac64-6243-404c-a413-49b548d4e510\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.906322 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-util\") pod \"384fac64-6243-404c-a413-49b548d4e510\" (UID: \"384fac64-6243-404c-a413-49b548d4e510\") " Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.908469 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-bundle" (OuterVolumeSpecName: 
"bundle") pod "384fac64-6243-404c-a413-49b548d4e510" (UID: "384fac64-6243-404c-a413-49b548d4e510"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.912138 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384fac64-6243-404c-a413-49b548d4e510-kube-api-access-lxkrp" (OuterVolumeSpecName: "kube-api-access-lxkrp") pod "384fac64-6243-404c-a413-49b548d4e510" (UID: "384fac64-6243-404c-a413-49b548d4e510"). InnerVolumeSpecName "kube-api-access-lxkrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:27:19 crc kubenswrapper[4729]: I0127 14:27:19.918268 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-util" (OuterVolumeSpecName: "util") pod "384fac64-6243-404c-a413-49b548d4e510" (UID: "384fac64-6243-404c-a413-49b548d4e510"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:27:20 crc kubenswrapper[4729]: I0127 14:27:20.008241 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxkrp\" (UniqueName: \"kubernetes.io/projected/384fac64-6243-404c-a413-49b548d4e510-kube-api-access-lxkrp\") on node \"crc\" DevicePath \"\"" Jan 27 14:27:20 crc kubenswrapper[4729]: I0127 14:27:20.008288 4729 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:27:20 crc kubenswrapper[4729]: I0127 14:27:20.008299 4729 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/384fac64-6243-404c-a413-49b548d4e510-util\") on node \"crc\" DevicePath \"\"" Jan 27 14:27:20 crc kubenswrapper[4729]: I0127 14:27:20.362464 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" event={"ID":"384fac64-6243-404c-a413-49b548d4e510","Type":"ContainerDied","Data":"2a76f0172b42b1fc570b74ae526a25df5ce9bcea709630e4df67c439b63b03af"} Jan 27 14:27:20 crc kubenswrapper[4729]: I0127 14:27:20.362514 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a76f0172b42b1fc570b74ae526a25df5ce9bcea709630e4df67c439b63b03af" Jan 27 14:27:20 crc kubenswrapper[4729]: I0127 14:27:20.362562 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk" Jan 27 14:27:22 crc kubenswrapper[4729]: I0127 14:27:22.655471 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:27:22 crc kubenswrapper[4729]: I0127 14:27:22.655918 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:27:22 crc kubenswrapper[4729]: I0127 14:27:22.655988 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:27:22 crc kubenswrapper[4729]: I0127 14:27:22.656819 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57151d4fd492cf8de80ac5d781a744532bc43e34fa60909ce879c487fc0325fe"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:27:22 crc kubenswrapper[4729]: I0127 14:27:22.656901 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://57151d4fd492cf8de80ac5d781a744532bc43e34fa60909ce879c487fc0325fe" gracePeriod=600 Jan 27 14:27:23 crc kubenswrapper[4729]: I0127 14:27:23.386684 4729 generic.go:334] "Generic (PLEG): 
container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="57151d4fd492cf8de80ac5d781a744532bc43e34fa60909ce879c487fc0325fe" exitCode=0 Jan 27 14:27:23 crc kubenswrapper[4729]: I0127 14:27:23.386759 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"57151d4fd492cf8de80ac5d781a744532bc43e34fa60909ce879c487fc0325fe"} Jan 27 14:27:23 crc kubenswrapper[4729]: I0127 14:27:23.387289 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898"} Jan 27 14:27:23 crc kubenswrapper[4729]: I0127 14:27:23.387312 4729 scope.go:117] "RemoveContainer" containerID="f3482ae2a5a3f30c725532e5e8607ac477190fd04992775cb1a4c3b191b4ca65" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.866468 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn"] Jan 27 14:27:28 crc kubenswrapper[4729]: E0127 14:27:28.867579 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="extract" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.867600 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="extract" Jan 27 14:27:28 crc kubenswrapper[4729]: E0127 14:27:28.867661 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="util" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.867670 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="util" Jan 27 14:27:28 crc kubenswrapper[4729]: 
E0127 14:27:28.867683 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="pull" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.867691 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="pull" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.867853 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="384fac64-6243-404c-a413-49b548d4e510" containerName="extract" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.868734 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.872224 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-2fs66" Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.892291 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn"] Jan 27 14:27:28 crc kubenswrapper[4729]: I0127 14:27:28.962961 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5hd\" (UniqueName: \"kubernetes.io/projected/3cda87f1-f88f-4ade-a0fd-d0359a00e665-kube-api-access-pr5hd\") pod \"openstack-operator-controller-init-77cf586fbc-wj4vn\" (UID: \"3cda87f1-f88f-4ade-a0fd-d0359a00e665\") " pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:27:29 crc kubenswrapper[4729]: I0127 14:27:29.064526 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5hd\" (UniqueName: \"kubernetes.io/projected/3cda87f1-f88f-4ade-a0fd-d0359a00e665-kube-api-access-pr5hd\") pod \"openstack-operator-controller-init-77cf586fbc-wj4vn\" (UID: 
\"3cda87f1-f88f-4ade-a0fd-d0359a00e665\") " pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:27:29 crc kubenswrapper[4729]: I0127 14:27:29.097133 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5hd\" (UniqueName: \"kubernetes.io/projected/3cda87f1-f88f-4ade-a0fd-d0359a00e665-kube-api-access-pr5hd\") pod \"openstack-operator-controller-init-77cf586fbc-wj4vn\" (UID: \"3cda87f1-f88f-4ade-a0fd-d0359a00e665\") " pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:27:29 crc kubenswrapper[4729]: I0127 14:27:29.189358 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:27:29 crc kubenswrapper[4729]: I0127 14:27:29.646492 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn"] Jan 27 14:27:30 crc kubenswrapper[4729]: I0127 14:27:30.441505 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" event={"ID":"3cda87f1-f88f-4ade-a0fd-d0359a00e665","Type":"ContainerStarted","Data":"b5d98b48475c83b61050e139a500439a49de72e27a40998961054b4a78037e92"} Jan 27 14:27:35 crc kubenswrapper[4729]: I0127 14:27:35.478292 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" event={"ID":"3cda87f1-f88f-4ade-a0fd-d0359a00e665","Type":"ContainerStarted","Data":"a45b7d6e4113867173ff8229ced812ce9fca70547e502fb987a8a62e39fb92c7"} Jan 27 14:27:36 crc kubenswrapper[4729]: I0127 14:27:36.485834 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:27:36 crc kubenswrapper[4729]: I0127 14:27:36.518666 4729 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" podStartSLOduration=3.103384569 podStartE2EDuration="8.518647975s" podCreationTimestamp="2026-01-27 14:27:28 +0000 UTC" firstStartedPulling="2026-01-27 14:27:29.656404955 +0000 UTC m=+1336.240595959" lastFinishedPulling="2026-01-27 14:27:35.071668361 +0000 UTC m=+1341.655859365" observedRunningTime="2026-01-27 14:27:36.514306556 +0000 UTC m=+1343.098497560" watchObservedRunningTime="2026-01-27 14:27:36.518647975 +0000 UTC m=+1343.102838979" Jan 27 14:27:49 crc kubenswrapper[4729]: I0127 14:27:49.193107 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-77cf586fbc-wj4vn" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.416212 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.418071 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:28:18 crc kubenswrapper[4729]: W0127 14:28:18.421915 4729 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7qtdh": failed to list *v1.Secret: secrets "cinder-operator-controller-manager-dockercfg-7qtdh" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Jan 27 14:28:18 crc kubenswrapper[4729]: E0127 14:28:18.422273 4729 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-7qtdh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-operator-controller-manager-dockercfg-7qtdh\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.423630 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.424745 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.428410 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xmv6d" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.432820 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.442058 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.471890 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.472993 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.477387 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xp4d4" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.512194 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfwx\" (UniqueName: \"kubernetes.io/projected/27edcc9a-7976-42bd-9e8b-a7c95936f305-kube-api-access-ntfwx\") pod \"cinder-operator-controller-manager-7478f7dbf9-m7jfx\" (UID: \"27edcc9a-7976-42bd-9e8b-a7c95936f305\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.512304 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7hjr\" (UniqueName: \"kubernetes.io/projected/a27299b3-aeb1-4014-a145-6b5b908542fc-kube-api-access-d7hjr\") pod \"barbican-operator-controller-manager-7f86f8796f-fcvrz\" (UID: \"a27299b3-aeb1-4014-a145-6b5b908542fc\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.520456 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.557974 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.559271 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.562534 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.564857 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-gnrb7" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.572658 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.574457 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mrxrq" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.576383 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.603282 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.615716 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbn8s\" (UniqueName: \"kubernetes.io/projected/6818e775-019d-4bda-94ba-b7e550c9a127-kube-api-access-hbn8s\") pod \"designate-operator-controller-manager-b45d7bf98-4bpwj\" (UID: \"6818e775-019d-4bda-94ba-b7e550c9a127\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.615847 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntfwx\" (UniqueName: 
\"kubernetes.io/projected/27edcc9a-7976-42bd-9e8b-a7c95936f305-kube-api-access-ntfwx\") pod \"cinder-operator-controller-manager-7478f7dbf9-m7jfx\" (UID: \"27edcc9a-7976-42bd-9e8b-a7c95936f305\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.615952 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7hjr\" (UniqueName: \"kubernetes.io/projected/a27299b3-aeb1-4014-a145-6b5b908542fc-kube-api-access-d7hjr\") pod \"barbican-operator-controller-manager-7f86f8796f-fcvrz\" (UID: \"a27299b3-aeb1-4014-a145-6b5b908542fc\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.616481 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.617739 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.627628 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.628660 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.629265 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pf769" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.639090 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.642931 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.651209 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-8tcb9" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.665522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7hjr\" (UniqueName: \"kubernetes.io/projected/a27299b3-aeb1-4014-a145-6b5b908542fc-kube-api-access-d7hjr\") pod \"barbican-operator-controller-manager-7f86f8796f-fcvrz\" (UID: \"a27299b3-aeb1-4014-a145-6b5b908542fc\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.665598 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.685489 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntfwx\" (UniqueName: \"kubernetes.io/projected/27edcc9a-7976-42bd-9e8b-a7c95936f305-kube-api-access-ntfwx\") pod \"cinder-operator-controller-manager-7478f7dbf9-m7jfx\" (UID: \"27edcc9a-7976-42bd-9e8b-a7c95936f305\") " 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.719485 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq5sz\" (UniqueName: \"kubernetes.io/projected/64a73e98-23a2-4634-ba0f-fcf5389e38e1-kube-api-access-jq5sz\") pod \"horizon-operator-controller-manager-77d5c5b54f-qlm8l\" (UID: \"64a73e98-23a2-4634-ba0f-fcf5389e38e1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.719578 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcq9j\" (UniqueName: \"kubernetes.io/projected/ef99dd2b-4274-4277-8517-c748ef232c38-kube-api-access-fcq9j\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.719648 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jtg\" (UniqueName: \"kubernetes.io/projected/48e0b394-ae44-484e-821f-b821cd11c656-kube-api-access-h5jtg\") pod \"heat-operator-controller-manager-594c8c9d5d-5csls\" (UID: \"48e0b394-ae44-484e-821f-b821cd11c656\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.719692 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c874n\" (UniqueName: \"kubernetes.io/projected/c9cd8871-5d83-436f-b787-a8769327429d-kube-api-access-c874n\") pod \"glance-operator-controller-manager-78fdd796fd-ck286\" (UID: \"c9cd8871-5d83-436f-b787-a8769327429d\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 
14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.719776 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbn8s\" (UniqueName: \"kubernetes.io/projected/6818e775-019d-4bda-94ba-b7e550c9a127-kube-api-access-hbn8s\") pod \"designate-operator-controller-manager-b45d7bf98-4bpwj\" (UID: \"6818e775-019d-4bda-94ba-b7e550c9a127\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.719825 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.754173 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbn8s\" (UniqueName: \"kubernetes.io/projected/6818e775-019d-4bda-94ba-b7e550c9a127-kube-api-access-hbn8s\") pod \"designate-operator-controller-manager-b45d7bf98-4bpwj\" (UID: \"6818e775-019d-4bda-94ba-b7e550c9a127\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.759656 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.773819 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.789286 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.798621 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qmb4l" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.808524 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.828186 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.830620 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq5sz\" (UniqueName: \"kubernetes.io/projected/64a73e98-23a2-4634-ba0f-fcf5389e38e1-kube-api-access-jq5sz\") pod \"horizon-operator-controller-manager-77d5c5b54f-qlm8l\" (UID: \"64a73e98-23a2-4634-ba0f-fcf5389e38e1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.830698 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcq9j\" (UniqueName: \"kubernetes.io/projected/ef99dd2b-4274-4277-8517-c748ef232c38-kube-api-access-fcq9j\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.830795 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jtg\" (UniqueName: \"kubernetes.io/projected/48e0b394-ae44-484e-821f-b821cd11c656-kube-api-access-h5jtg\") pod \"heat-operator-controller-manager-594c8c9d5d-5csls\" (UID: 
\"48e0b394-ae44-484e-821f-b821cd11c656\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.830842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c874n\" (UniqueName: \"kubernetes.io/projected/c9cd8871-5d83-436f-b787-a8769327429d-kube-api-access-c874n\") pod \"glance-operator-controller-manager-78fdd796fd-ck286\" (UID: \"c9cd8871-5d83-436f-b787-a8769327429d\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.830954 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:18 crc kubenswrapper[4729]: E0127 14:28:18.831116 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:18 crc kubenswrapper[4729]: E0127 14:28:18.831191 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert podName:ef99dd2b-4274-4277-8517-c748ef232c38 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:19.331168065 +0000 UTC m=+1385.915359079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert") pod "infra-operator-controller-manager-694cf4f878-vm6sf" (UID: "ef99dd2b-4274-4277-8517-c748ef232c38") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.836735 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.841278 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.844320 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-nqrlb" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.873048 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c874n\" (UniqueName: \"kubernetes.io/projected/c9cd8871-5d83-436f-b787-a8769327429d-kube-api-access-c874n\") pod \"glance-operator-controller-manager-78fdd796fd-ck286\" (UID: \"c9cd8871-5d83-436f-b787-a8769327429d\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.873334 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.881193 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jtg\" (UniqueName: \"kubernetes.io/projected/48e0b394-ae44-484e-821f-b821cd11c656-kube-api-access-h5jtg\") pod \"heat-operator-controller-manager-594c8c9d5d-5csls\" (UID: \"48e0b394-ae44-484e-821f-b821cd11c656\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.890075 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.892570 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcq9j\" (UniqueName: \"kubernetes.io/projected/ef99dd2b-4274-4277-8517-c748ef232c38-kube-api-access-fcq9j\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.912921 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.918492 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.926781 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.931859 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq5sz\" (UniqueName: \"kubernetes.io/projected/64a73e98-23a2-4634-ba0f-fcf5389e38e1-kube-api-access-jq5sz\") pod \"horizon-operator-controller-manager-77d5c5b54f-qlm8l\" (UID: \"64a73e98-23a2-4634-ba0f-fcf5389e38e1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.941960 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkdpc\" (UniqueName: \"kubernetes.io/projected/53268481-b675-416f-a9d3-343d349e3bb4-kube-api-access-pkdpc\") pod \"ironic-operator-controller-manager-598f7747c9-rnnng\" (UID: \"53268481-b675-416f-a9d3-343d349e3bb4\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.942052 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjdw\" (UniqueName: \"kubernetes.io/projected/80666255-494b-4c9a-8434-49c509505a32-kube-api-access-wfjdw\") pod \"keystone-operator-controller-manager-b8b6d4659-bfbgb\" (UID: \"80666255-494b-4c9a-8434-49c509505a32\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.948718 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wqsnz" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.955950 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j"] Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.957019 4729 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.961243 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.971772 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tsv5w" Jan 27 14:28:18 crc kubenswrapper[4729]: I0127 14:28:18.984750 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.002582 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.043640 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjdw\" (UniqueName: \"kubernetes.io/projected/80666255-494b-4c9a-8434-49c509505a32-kube-api-access-wfjdw\") pod \"keystone-operator-controller-manager-b8b6d4659-bfbgb\" (UID: \"80666255-494b-4c9a-8434-49c509505a32\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.043717 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp7fq\" (UniqueName: \"kubernetes.io/projected/49b9ec9d-9998-465c-b62f-5c97d5913dd7-kube-api-access-bp7fq\") pod \"manila-operator-controller-manager-78c6999f6f-h98cg\" (UID: \"49b9ec9d-9998-465c-b62f-5c97d5913dd7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.043783 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lwt\" (UniqueName: \"kubernetes.io/projected/e0d9910b-f1f9-4f1e-b920-dd1c3c787f78-kube-api-access-t6lwt\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-hn29j\" (UID: \"e0d9910b-f1f9-4f1e-b920-dd1c3c787f78\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.043932 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkdpc\" (UniqueName: \"kubernetes.io/projected/53268481-b675-416f-a9d3-343d349e3bb4-kube-api-access-pkdpc\") pod \"ironic-operator-controller-manager-598f7747c9-rnnng\" (UID: \"53268481-b675-416f-a9d3-343d349e3bb4\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.059218 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.060517 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.074364 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-76mfr" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.089196 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkdpc\" (UniqueName: \"kubernetes.io/projected/53268481-b675-416f-a9d3-343d349e3bb4-kube-api-access-pkdpc\") pod \"ironic-operator-controller-manager-598f7747c9-rnnng\" (UID: \"53268481-b675-416f-a9d3-343d349e3bb4\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.119662 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjdw\" (UniqueName: \"kubernetes.io/projected/80666255-494b-4c9a-8434-49c509505a32-kube-api-access-wfjdw\") pod \"keystone-operator-controller-manager-b8b6d4659-bfbgb\" (UID: \"80666255-494b-4c9a-8434-49c509505a32\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.125131 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.127270 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.137834 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2l8pn" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.152544 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqxw\" (UniqueName: \"kubernetes.io/projected/eb48ac92-5355-41a2-bdce-f70e47cb91d9-kube-api-access-ngqxw\") pod \"neutron-operator-controller-manager-78d58447c5-54vz9\" (UID: \"eb48ac92-5355-41a2-bdce-f70e47cb91d9\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.152623 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp7fq\" (UniqueName: \"kubernetes.io/projected/49b9ec9d-9998-465c-b62f-5c97d5913dd7-kube-api-access-bp7fq\") pod \"manila-operator-controller-manager-78c6999f6f-h98cg\" (UID: \"49b9ec9d-9998-465c-b62f-5c97d5913dd7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.152684 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lwt\" (UniqueName: \"kubernetes.io/projected/e0d9910b-f1f9-4f1e-b920-dd1c3c787f78-kube-api-access-t6lwt\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-hn29j\" (UID: \"e0d9910b-f1f9-4f1e-b920-dd1c3c787f78\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.161951 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.199503 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bp7fq\" (UniqueName: \"kubernetes.io/projected/49b9ec9d-9998-465c-b62f-5c97d5913dd7-kube-api-access-bp7fq\") pod \"manila-operator-controller-manager-78c6999f6f-h98cg\" (UID: \"49b9ec9d-9998-465c-b62f-5c97d5913dd7\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.201498 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lwt\" (UniqueName: \"kubernetes.io/projected/e0d9910b-f1f9-4f1e-b920-dd1c3c787f78-kube-api-access-t6lwt\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-hn29j\" (UID: \"e0d9910b-f1f9-4f1e-b920-dd1c3c787f78\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.218194 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.221135 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.237714 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.257756 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5h7w\" (UniqueName: \"kubernetes.io/projected/73a5b611-6e78-44bf-94ad-2a1fdf4a4819-kube-api-access-l5h7w\") pod \"nova-operator-controller-manager-7bdb645866-w79q5\" (UID: \"73a5b611-6e78-44bf-94ad-2a1fdf4a4819\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.257818 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngqxw\" (UniqueName: \"kubernetes.io/projected/eb48ac92-5355-41a2-bdce-f70e47cb91d9-kube-api-access-ngqxw\") pod \"neutron-operator-controller-manager-78d58447c5-54vz9\" (UID: \"eb48ac92-5355-41a2-bdce-f70e47cb91d9\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.277344 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.278446 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.287822 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sw8d2" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.297331 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.307982 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.315996 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngqxw\" (UniqueName: \"kubernetes.io/projected/eb48ac92-5355-41a2-bdce-f70e47cb91d9-kube-api-access-ngqxw\") pod \"neutron-operator-controller-manager-78d58447c5-54vz9\" (UID: \"eb48ac92-5355-41a2-bdce-f70e47cb91d9\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.323669 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.362971 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.364078 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.369271 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mvqfz" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.371544 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.371609 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k5mt\" (UniqueName: \"kubernetes.io/projected/d9c052a4-bb18-4634-8acd-13d899dcc8af-kube-api-access-8k5mt\") pod \"octavia-operator-controller-manager-5f4cd88d46-dpnb5\" (UID: \"d9c052a4-bb18-4634-8acd-13d899dcc8af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.371647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5h7w\" (UniqueName: \"kubernetes.io/projected/73a5b611-6e78-44bf-94ad-2a1fdf4a4819-kube-api-access-l5h7w\") pod \"nova-operator-controller-manager-7bdb645866-w79q5\" (UID: \"73a5b611-6e78-44bf-94ad-2a1fdf4a4819\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:19 crc kubenswrapper[4729]: E0127 14:28:19.372253 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:19 crc kubenswrapper[4729]: E0127 14:28:19.372304 4729 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert podName:ef99dd2b-4274-4277-8517-c748ef232c38 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:20.372285128 +0000 UTC m=+1386.956476132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert") pod "infra-operator-controller-manager-694cf4f878-vm6sf" (UID: "ef99dd2b-4274-4277-8517-c748ef232c38") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.408931 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.410120 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.425057 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5h7w\" (UniqueName: \"kubernetes.io/projected/73a5b611-6e78-44bf-94ad-2a1fdf4a4819-kube-api-access-l5h7w\") pod \"nova-operator-controller-manager-7bdb645866-w79q5\" (UID: \"73a5b611-6e78-44bf-94ad-2a1fdf4a4819\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.425239 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.437272 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.438182 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5tnln" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.439487 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.443948 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.461208 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.462288 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.469382 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-44wq9" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.476225 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k5mt\" (UniqueName: \"kubernetes.io/projected/d9c052a4-bb18-4634-8acd-13d899dcc8af-kube-api-access-8k5mt\") pod \"octavia-operator-controller-manager-5f4cd88d46-dpnb5\" (UID: \"d9c052a4-bb18-4634-8acd-13d899dcc8af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.476295 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2gw\" (UniqueName: \"kubernetes.io/projected/e1ef0def-8b43-404c-a20f-ccffb028796d-kube-api-access-zh2gw\") pod \"ovn-operator-controller-manager-6f75f45d54-7q6dx\" (UID: \"e1ef0def-8b43-404c-a20f-ccffb028796d\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.476365 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vx4\" (UniqueName: \"kubernetes.io/projected/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-kube-api-access-42vx4\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.476423 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.495210 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.500368 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.553988 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k5mt\" (UniqueName: \"kubernetes.io/projected/d9c052a4-bb18-4634-8acd-13d899dcc8af-kube-api-access-8k5mt\") pod \"octavia-operator-controller-manager-5f4cd88d46-dpnb5\" (UID: \"d9c052a4-bb18-4634-8acd-13d899dcc8af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.563062 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.564579 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.566447 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w78vh" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.576045 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.577528 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.577649 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjqg\" (UniqueName: \"kubernetes.io/projected/a393649b-f1a3-44fb-9cb8-a289fcc3f01f-kube-api-access-ltjqg\") pod \"placement-operator-controller-manager-79d5ccc684-22t59\" (UID: \"a393649b-f1a3-44fb-9cb8-a289fcc3f01f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.577760 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh2gw\" (UniqueName: \"kubernetes.io/projected/e1ef0def-8b43-404c-a20f-ccffb028796d-kube-api-access-zh2gw\") pod \"ovn-operator-controller-manager-6f75f45d54-7q6dx\" (UID: \"e1ef0def-8b43-404c-a20f-ccffb028796d\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.577796 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-42vx4\" (UniqueName: \"kubernetes.io/projected/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-kube-api-access-42vx4\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:19 crc kubenswrapper[4729]: E0127 14:28:19.577867 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:19 crc kubenswrapper[4729]: E0127 14:28:19.577960 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert podName:5a91ffd3-fab5-40f6-b808-7d0fd80888aa nodeName:}" failed. No retries permitted until 2026-01-27 14:28:20.077938882 +0000 UTC m=+1386.662129886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" (UID: "5a91ffd3-fab5-40f6-b808-7d0fd80888aa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.582243 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.583324 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.590126 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-mlwj2" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.604175 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.607584 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vx4\" (UniqueName: \"kubernetes.io/projected/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-kube-api-access-42vx4\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.620788 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.624390 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.627437 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-l7djm" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.630779 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh2gw\" (UniqueName: \"kubernetes.io/projected/e1ef0def-8b43-404c-a20f-ccffb028796d-kube-api-access-zh2gw\") pod \"ovn-operator-controller-manager-6f75f45d54-7q6dx\" (UID: \"e1ef0def-8b43-404c-a20f-ccffb028796d\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.660194 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.663365 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.694001 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjqg\" (UniqueName: \"kubernetes.io/projected/a393649b-f1a3-44fb-9cb8-a289fcc3f01f-kube-api-access-ltjqg\") pod \"placement-operator-controller-manager-79d5ccc684-22t59\" (UID: \"a393649b-f1a3-44fb-9cb8-a289fcc3f01f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.694104 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mpwp\" (UniqueName: \"kubernetes.io/projected/60c59a6c-9eb5-4869-8d50-2cb234912d6b-kube-api-access-2mpwp\") pod \"swift-operator-controller-manager-547cbdb99f-m4z7c\" (UID: \"60c59a6c-9eb5-4869-8d50-2cb234912d6b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.694254 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zgcq\" (UniqueName: \"kubernetes.io/projected/8eb4b08e-edb2-4db8-af1e-549e8e1396d1-kube-api-access-5zgcq\") pod \"telemetry-operator-controller-manager-66f997549c-st8m2\" (UID: \"8eb4b08e-edb2-4db8-af1e-549e8e1396d1\") " pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.703926 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.721403 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-9c4lk"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.725143 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.728163 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w58h4" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.737967 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-9c4lk"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.750655 4729 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.750728 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.776707 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjqg\" (UniqueName: \"kubernetes.io/projected/a393649b-f1a3-44fb-9cb8-a289fcc3f01f-kube-api-access-ltjqg\") pod \"placement-operator-controller-manager-79d5ccc684-22t59\" (UID: \"a393649b-f1a3-44fb-9cb8-a289fcc3f01f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.795537 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zgcq\" (UniqueName: \"kubernetes.io/projected/8eb4b08e-edb2-4db8-af1e-549e8e1396d1-kube-api-access-5zgcq\") pod \"telemetry-operator-controller-manager-66f997549c-st8m2\" (UID: \"8eb4b08e-edb2-4db8-af1e-549e8e1396d1\") " pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.795623 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmnnv\" (UniqueName: \"kubernetes.io/projected/d5d5726b-1680-44de-9752-2e56e45a3d12-kube-api-access-cmnnv\") pod \"test-operator-controller-manager-69797bbcbd-mxmhp\" (UID: \"d5d5726b-1680-44de-9752-2e56e45a3d12\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.795689 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mpwp\" (UniqueName: \"kubernetes.io/projected/60c59a6c-9eb5-4869-8d50-2cb234912d6b-kube-api-access-2mpwp\") pod \"swift-operator-controller-manager-547cbdb99f-m4z7c\" (UID: \"60c59a6c-9eb5-4869-8d50-2cb234912d6b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:28:19 crc kubenswrapper[4729]: 
I0127 14:28:19.795769 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqfmj\" (UniqueName: \"kubernetes.io/projected/7f77a2ce-03ee-4d74-a7df-052255e0f337-kube-api-access-wqfmj\") pod \"watcher-operator-controller-manager-564965969-9c4lk\" (UID: \"7f77a2ce-03ee-4d74-a7df-052255e0f337\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.804770 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.807211 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.811852 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pxxcc" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.812045 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.812152 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.836792 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.841811 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mpwp\" (UniqueName: \"kubernetes.io/projected/60c59a6c-9eb5-4869-8d50-2cb234912d6b-kube-api-access-2mpwp\") pod \"swift-operator-controller-manager-547cbdb99f-m4z7c\" (UID: \"60c59a6c-9eb5-4869-8d50-2cb234912d6b\") " 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.847692 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zgcq\" (UniqueName: \"kubernetes.io/projected/8eb4b08e-edb2-4db8-af1e-549e8e1396d1-kube-api-access-5zgcq\") pod \"telemetry-operator-controller-manager-66f997549c-st8m2\" (UID: \"8eb4b08e-edb2-4db8-af1e-549e8e1396d1\") " pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.856534 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.858034 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.861428 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-rlfzs" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.872425 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr"] Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.900274 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.900349 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqfmj\" 
(UniqueName: \"kubernetes.io/projected/7f77a2ce-03ee-4d74-a7df-052255e0f337-kube-api-access-wqfmj\") pod \"watcher-operator-controller-manager-564965969-9c4lk\" (UID: \"7f77a2ce-03ee-4d74-a7df-052255e0f337\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.900484 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.900521 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngv42\" (UniqueName: \"kubernetes.io/projected/4a222b58-d97f-4d40-9bb4-517b4798eb07-kube-api-access-ngv42\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.900563 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmnnv\" (UniqueName: \"kubernetes.io/projected/d5d5726b-1680-44de-9752-2e56e45a3d12-kube-api-access-cmnnv\") pod \"test-operator-controller-manager-69797bbcbd-mxmhp\" (UID: \"d5d5726b-1680-44de-9752-2e56e45a3d12\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.928750 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmnnv\" (UniqueName: \"kubernetes.io/projected/d5d5726b-1680-44de-9752-2e56e45a3d12-kube-api-access-cmnnv\") pod 
\"test-operator-controller-manager-69797bbcbd-mxmhp\" (UID: \"d5d5726b-1680-44de-9752-2e56e45a3d12\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.929635 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqfmj\" (UniqueName: \"kubernetes.io/projected/7f77a2ce-03ee-4d74-a7df-052255e0f337-kube-api-access-wqfmj\") pod \"watcher-operator-controller-manager-564965969-9c4lk\" (UID: \"7f77a2ce-03ee-4d74-a7df-052255e0f337\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:28:19 crc kubenswrapper[4729]: I0127 14:28:19.947842 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7qtdh" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.003739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.003854 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngv42\" (UniqueName: \"kubernetes.io/projected/4a222b58-d97f-4d40-9bb4-517b4798eb07-kube-api-access-ngv42\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.004089 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vplf4\" (UniqueName: 
\"kubernetes.io/projected/6e8131d6-585c-43f8-9231-204ef68de1ba-kube-api-access-vplf4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-khqrr\" (UID: \"6e8131d6-585c-43f8-9231-204ef68de1ba\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.004093 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.004194 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:20.504168596 +0000 UTC m=+1387.088359780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.004211 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.004527 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.004604 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs 
podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:20.504581297 +0000 UTC m=+1387.088772301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "metrics-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.045648 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngv42\" (UniqueName: \"kubernetes.io/projected/4a222b58-d97f-4d40-9bb4-517b4798eb07-kube-api-access-ngv42\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.102465 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.105561 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vplf4\" (UniqueName: \"kubernetes.io/projected/6e8131d6-585c-43f8-9231-204ef68de1ba-kube-api-access-vplf4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-khqrr\" (UID: \"6e8131d6-585c-43f8-9231-204ef68de1ba\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.105860 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.108807 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.108900 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert podName:5a91ffd3-fab5-40f6-b808-7d0fd80888aa nodeName:}" failed. No retries permitted until 2026-01-27 14:28:21.108858969 +0000 UTC m=+1387.693049973 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" (UID: "5a91ffd3-fab5-40f6-b808-7d0fd80888aa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.122575 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.127954 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vplf4\" (UniqueName: \"kubernetes.io/projected/6e8131d6-585c-43f8-9231-204ef68de1ba-kube-api-access-vplf4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-khqrr\" (UID: \"6e8131d6-585c-43f8-9231-204ef68de1ba\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.209119 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.224353 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.261313 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.296092 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.419182 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.419432 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.420095 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert podName:ef99dd2b-4274-4277-8517-c748ef232c38 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:22.420077039 +0000 UTC m=+1389.004268043 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert") pod "infra-operator-controller-manager-694cf4f878-vm6sf" (UID: "ef99dd2b-4274-4277-8517-c748ef232c38") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.521691 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.521807 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.521833 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.521979 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:21.521962256 +0000 UTC m=+1388.106153260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "webhook-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.521990 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: E0127 14:28:20.522035 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:21.522022127 +0000 UTC m=+1388.106213131 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "metrics-server-cert" not found Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.758042 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286"] Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.792159 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls"] Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.821031 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l"] Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.840128 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj"] Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.854889 
4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz"] Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.886807 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" event={"ID":"6818e775-019d-4bda-94ba-b7e550c9a127","Type":"ContainerStarted","Data":"ae0447771bb8b96544b02ac5735c7a9c22493a1ffd8f199e8ebbef72133aa90c"} Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.889228 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" event={"ID":"48e0b394-ae44-484e-821f-b821cd11c656","Type":"ContainerStarted","Data":"114f79b5adc720c0dbf494bd9092afd58f66964766ff80dadb3e23ac96efca5e"} Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.893494 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" event={"ID":"c9cd8871-5d83-436f-b787-a8769327429d","Type":"ContainerStarted","Data":"9643e9e841116d346e6a0e4b4977f20a3ac762a884f1d3f0540be91e9585af9e"} Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.899665 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" event={"ID":"64a73e98-23a2-4634-ba0f-fcf5389e38e1","Type":"ContainerStarted","Data":"781af4776296aa3245f1485f805410b9e56bd3ae0e64dc5718a75900c9195ba5"} Jan 27 14:28:20 crc kubenswrapper[4729]: I0127 14:28:20.902346 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" event={"ID":"a27299b3-aeb1-4014-a145-6b5b908542fc","Type":"ContainerStarted","Data":"736bbdb055f35f5311ef42cbde61b9e1c07e48d815eaf68a0617c382d8d18988"} Jan 27 14:28:21 crc kubenswrapper[4729]: I0127 14:28:21.136591 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:21 crc kubenswrapper[4729]: E0127 14:28:21.137262 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:21 crc kubenswrapper[4729]: E0127 14:28:21.137321 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert podName:5a91ffd3-fab5-40f6-b808-7d0fd80888aa nodeName:}" failed. No retries permitted until 2026-01-27 14:28:23.137302734 +0000 UTC m=+1389.721493738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" (UID: "5a91ffd3-fab5-40f6-b808-7d0fd80888aa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:21 crc kubenswrapper[4729]: I0127 14:28:21.545716 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:21 crc kubenswrapper[4729]: I0127 14:28:21.545893 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod 
\"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:21 crc kubenswrapper[4729]: E0127 14:28:21.546000 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:28:21 crc kubenswrapper[4729]: E0127 14:28:21.546054 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:23.546039483 +0000 UTC m=+1390.130230487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "webhook-server-cert" not found Jan 27 14:28:21 crc kubenswrapper[4729]: E0127 14:28:21.546057 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:28:21 crc kubenswrapper[4729]: E0127 14:28:21.546090 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:23.546079924 +0000 UTC m=+1390.130270928 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "metrics-server-cert" not found Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.015443 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.077732 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng"] Jan 27 14:28:22 crc kubenswrapper[4729]: W0127 14:28:22.078800 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53268481_b675_416f_a9d3_343d349e3bb4.slice/crio-0a7839983e7c0f03078d2c26f91e3a615a6a8333986e664d02a50dcdb74146c5 WatchSource:0}: Error finding container 0a7839983e7c0f03078d2c26f91e3a615a6a8333986e664d02a50dcdb74146c5: Status 404 returned error can't find the container with id 0a7839983e7c0f03078d2c26f91e3a615a6a8333986e664d02a50dcdb74146c5 Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.122933 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.186941 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.217968 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9"] Jan 27 14:28:22 crc kubenswrapper[4729]: W0127 14:28:22.228694 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b9ec9d_9998_465c_b62f_5c97d5913dd7.slice/crio-455d9674d7dca4e62d7ae59589cae7fcb384d305ce099d0ac85e422248977e87 WatchSource:0}: Error finding container 455d9674d7dca4e62d7ae59589cae7fcb384d305ce099d0ac85e422248977e87: Status 404 returned error can't find the container with id 455d9674d7dca4e62d7ae59589cae7fcb384d305ce099d0ac85e422248977e87 Jan 27 14:28:22 crc kubenswrapper[4729]: W0127 14:28:22.258109 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb48ac92_5355_41a2_bdce_f70e47cb91d9.slice/crio-e001fe10a1375cae1e08dbf7e908614dfec058bbddda83a086adb5e042720bdd WatchSource:0}: Error finding container e001fe10a1375cae1e08dbf7e908614dfec058bbddda83a086adb5e042720bdd: Status 404 returned error can't find the container with id e001fe10a1375cae1e08dbf7e908614dfec058bbddda83a086adb5e042720bdd Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.262623 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-9c4lk"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.293852 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.313334 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.337699 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c"] Jan 27 14:28:22 crc kubenswrapper[4729]: W0127 14:28:22.341947 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9c052a4_bb18_4634_8acd_13d899dcc8af.slice/crio-c2bf52587b030155025614618d2889a0a0ccc523f9374a2f65826bfae2d338fa WatchSource:0}: Error finding container c2bf52587b030155025614618d2889a0a0ccc523f9374a2f65826bfae2d338fa: Status 404 returned error can't find the container with id c2bf52587b030155025614618d2889a0a0ccc523f9374a2f65826bfae2d338fa Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.354655 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx"] Jan 27 14:28:22 crc kubenswrapper[4729]: W0127 14:28:22.363891 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80666255_494b_4c9a_8434_49c509505a32.slice/crio-07f86f9bc51347deaaaaef8dff11328c2dea4a198273a24cdb370c5f52b910eb WatchSource:0}: Error finding container 07f86f9bc51347deaaaaef8dff11328c2dea4a198273a24cdb370c5f52b910eb: Status 404 returned error can't find the container with id 07f86f9bc51347deaaaaef8dff11328c2dea4a198273a24cdb370c5f52b910eb Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.368444 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb"] Jan 27 14:28:22 crc kubenswrapper[4729]: W0127 14:28:22.371268 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1ef0def_8b43_404c_a20f_ccffb028796d.slice/crio-8b4f9d607a25803ec0de42f7d658b60c1cfc84c47905ba7b84e35a4db4e5dce4 WatchSource:0}: Error finding container 8b4f9d607a25803ec0de42f7d658b60c1cfc84c47905ba7b84e35a4db4e5dce4: Status 404 returned error can't find the container with id 8b4f9d607a25803ec0de42f7d658b60c1cfc84c47905ba7b84e35a4db4e5dce4 Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.383643 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59"] Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.388933 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l5h7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-w79q5_openstack-operators(73a5b611-6e78-44bf-94ad-2a1fdf4a4819): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.389192 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zh2gw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-7q6dx_openstack-operators(e1ef0def-8b43-404c-a20f-ccffb028796d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.390078 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wfjdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-bfbgb_openstack-operators(80666255-494b-4c9a-8434-49c509505a32): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.390159 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" podUID="73a5b611-6e78-44bf-94ad-2a1fdf4a4819" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.390893 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" podUID="e1ef0def-8b43-404c-a20f-ccffb028796d" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.391980 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" podUID="80666255-494b-4c9a-8434-49c509505a32" Jan 27 14:28:22 crc 
kubenswrapper[4729]: E0127 14:28:22.390732 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.242:5001/openstack-k8s-operators/telemetry-operator:004841b4cdda91d8b709e945121ba0b3759ffc39,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zgcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-66f997549c-st8m2_openstack-operators(8eb4b08e-edb2-4db8-af1e-549e8e1396d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.398107 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" podUID="8eb4b08e-edb2-4db8-af1e-549e8e1396d1" Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.410945 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.428211 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.447684 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5"] Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.477066 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.477403 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:22 crc kubenswrapper[4729]: E0127 14:28:22.477511 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert podName:ef99dd2b-4274-4277-8517-c748ef232c38 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:26.477469524 +0000 UTC m=+1393.061660528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert") pod "infra-operator-controller-manager-694cf4f878-vm6sf" (UID: "ef99dd2b-4274-4277-8517-c748ef232c38") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:22 crc kubenswrapper[4729]: I0127 14:28:22.998144 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" event={"ID":"6e8131d6-585c-43f8-9231-204ef68de1ba","Type":"ContainerStarted","Data":"b5d51c5537c8c7d3ca4ad51451ac449236cf0aa5f765a750c798ec1be513dc7c"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.043761 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" event={"ID":"e1ef0def-8b43-404c-a20f-ccffb028796d","Type":"ContainerStarted","Data":"8b4f9d607a25803ec0de42f7d658b60c1cfc84c47905ba7b84e35a4db4e5dce4"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.046235 4729 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" event={"ID":"e0d9910b-f1f9-4f1e-b920-dd1c3c787f78","Type":"ContainerStarted","Data":"5a80098a489cb58f5525f1f783bba2a8cc953fce5e6c52bccc29499793177ec4"} Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.077077 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" podUID="e1ef0def-8b43-404c-a20f-ccffb028796d" Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.087411 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" event={"ID":"eb48ac92-5355-41a2-bdce-f70e47cb91d9","Type":"ContainerStarted","Data":"e001fe10a1375cae1e08dbf7e908614dfec058bbddda83a086adb5e042720bdd"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.092513 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" event={"ID":"49b9ec9d-9998-465c-b62f-5c97d5913dd7","Type":"ContainerStarted","Data":"455d9674d7dca4e62d7ae59589cae7fcb384d305ce099d0ac85e422248977e87"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.094104 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" event={"ID":"60c59a6c-9eb5-4869-8d50-2cb234912d6b","Type":"ContainerStarted","Data":"9fa630784b75a411f55ac0793413be26b4dee847f79d4f34da82e3e9d8079236"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.095580 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" 
event={"ID":"7f77a2ce-03ee-4d74-a7df-052255e0f337","Type":"ContainerStarted","Data":"d646cdd17cb1bd1795c6680b8931f9127c2dc4c936b346e6c3c3f5c8cd562720"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.097079 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" event={"ID":"a393649b-f1a3-44fb-9cb8-a289fcc3f01f","Type":"ContainerStarted","Data":"7961eabf9eddeb6edfec2a6f697cf612462720467de4d97cbeaa6e2a15df2027"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.098131 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" event={"ID":"80666255-494b-4c9a-8434-49c509505a32","Type":"ContainerStarted","Data":"07f86f9bc51347deaaaaef8dff11328c2dea4a198273a24cdb370c5f52b910eb"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.102143 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" event={"ID":"d9c052a4-bb18-4634-8acd-13d899dcc8af","Type":"ContainerStarted","Data":"c2bf52587b030155025614618d2889a0a0ccc523f9374a2f65826bfae2d338fa"} Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.102528 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" podUID="80666255-494b-4c9a-8434-49c509505a32" Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.105450 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" 
event={"ID":"27edcc9a-7976-42bd-9e8b-a7c95936f305","Type":"ContainerStarted","Data":"2aaa5774beb26556038b64dbc30dc2506b127bcb7114a60b1aef7ee0d305408c"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.109661 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" event={"ID":"8eb4b08e-edb2-4db8-af1e-549e8e1396d1","Type":"ContainerStarted","Data":"76b1b3051ccf149b1a747dccfff9344cf56b715ea2ffbeabe18ae6ded69290cd"} Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.114603 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.242:5001/openstack-k8s-operators/telemetry-operator:004841b4cdda91d8b709e945121ba0b3759ffc39\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" podUID="8eb4b08e-edb2-4db8-af1e-549e8e1396d1" Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.131888 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" event={"ID":"73a5b611-6e78-44bf-94ad-2a1fdf4a4819","Type":"ContainerStarted","Data":"f45a98e2722c54fbc9d34aef528747befbf28e917ad1cd07520438c88ba07d61"} Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.135313 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" podUID="73a5b611-6e78-44bf-94ad-2a1fdf4a4819" Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.171080 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" 
event={"ID":"d5d5726b-1680-44de-9752-2e56e45a3d12","Type":"ContainerStarted","Data":"74507f976a3d5a738751a4dfce4bbec1e861e35b641b64d3e9f36da523767dee"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.193069 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" event={"ID":"53268481-b675-416f-a9d3-343d349e3bb4","Type":"ContainerStarted","Data":"0a7839983e7c0f03078d2c26f91e3a615a6a8333986e664d02a50dcdb74146c5"} Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.194418 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.194594 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.194658 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert podName:5a91ffd3-fab5-40f6-b808-7d0fd80888aa nodeName:}" failed. No retries permitted until 2026-01-27 14:28:27.194642448 +0000 UTC m=+1393.778833442 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" (UID: "5a91ffd3-fab5-40f6-b808-7d0fd80888aa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.602017 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.602209 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.602500 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:27.602479932 +0000 UTC m=+1394.186670926 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "metrics-server-cert" not found Jan 27 14:28:23 crc kubenswrapper[4729]: I0127 14:28:23.604426 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.604798 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:28:23 crc kubenswrapper[4729]: E0127 14:28:23.604839 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:27.604829246 +0000 UTC m=+1394.189020250 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "webhook-server-cert" not found Jan 27 14:28:24 crc kubenswrapper[4729]: E0127 14:28:24.292507 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" podUID="80666255-494b-4c9a-8434-49c509505a32" Jan 27 14:28:24 crc kubenswrapper[4729]: E0127 14:28:24.293594 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" podUID="73a5b611-6e78-44bf-94ad-2a1fdf4a4819" Jan 27 14:28:24 crc kubenswrapper[4729]: E0127 14:28:24.293711 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" podUID="e1ef0def-8b43-404c-a20f-ccffb028796d" Jan 27 14:28:24 crc kubenswrapper[4729]: E0127 14:28:24.295709 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.129.56.242:5001/openstack-k8s-operators/telemetry-operator:004841b4cdda91d8b709e945121ba0b3759ffc39\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" podUID="8eb4b08e-edb2-4db8-af1e-549e8e1396d1" Jan 27 14:28:26 crc kubenswrapper[4729]: I0127 14:28:26.563710 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:26 crc kubenswrapper[4729]: E0127 14:28:26.564167 4729 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:26 crc kubenswrapper[4729]: E0127 14:28:26.564257 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert podName:ef99dd2b-4274-4277-8517-c748ef232c38 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:34.564237984 +0000 UTC m=+1401.148428988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert") pod "infra-operator-controller-manager-694cf4f878-vm6sf" (UID: "ef99dd2b-4274-4277-8517-c748ef232c38") : secret "infra-operator-webhook-server-cert" not found Jan 27 14:28:27 crc kubenswrapper[4729]: I0127 14:28:27.275777 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:27 crc kubenswrapper[4729]: E0127 14:28:27.276075 4729 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:27 crc kubenswrapper[4729]: E0127 14:28:27.276131 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert podName:5a91ffd3-fab5-40f6-b808-7d0fd80888aa nodeName:}" failed. No retries permitted until 2026-01-27 14:28:35.276114474 +0000 UTC m=+1401.860305478 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" (UID: "5a91ffd3-fab5-40f6-b808-7d0fd80888aa") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 14:28:27 crc kubenswrapper[4729]: I0127 14:28:27.683605 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:27 crc kubenswrapper[4729]: I0127 14:28:27.683729 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:27 crc kubenswrapper[4729]: E0127 14:28:27.683806 4729 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 14:28:27 crc kubenswrapper[4729]: E0127 14:28:27.683868 4729 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 14:28:27 crc kubenswrapper[4729]: E0127 14:28:27.684006 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:35.683989599 +0000 UTC m=+1402.268180603 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "metrics-server-cert" not found Jan 27 14:28:27 crc kubenswrapper[4729]: E0127 14:28:27.684026 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs podName:4a222b58-d97f-4d40-9bb4-517b4798eb07 nodeName:}" failed. No retries permitted until 2026-01-27 14:28:35.68401983 +0000 UTC m=+1402.268210824 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs") pod "openstack-operator-controller-manager-858d4757d5-qn8zm" (UID: "4a222b58-d97f-4d40-9bb4-517b4798eb07") : secret "webhook-server-cert" not found Jan 27 14:28:34 crc kubenswrapper[4729]: E0127 14:28:34.438293 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 27 14:28:34 crc kubenswrapper[4729]: E0127 14:28:34.438870 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c874n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-ck286_openstack-operators(c9cd8871-5d83-436f-b787-a8769327429d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:34 crc kubenswrapper[4729]: E0127 14:28:34.440051 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" podUID="c9cd8871-5d83-436f-b787-a8769327429d" Jan 27 14:28:34 crc kubenswrapper[4729]: I0127 14:28:34.608308 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:34 crc kubenswrapper[4729]: I0127 14:28:34.620019 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/ef99dd2b-4274-4277-8517-c748ef232c38-cert\") pod \"infra-operator-controller-manager-694cf4f878-vm6sf\" (UID: \"ef99dd2b-4274-4277-8517-c748ef232c38\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:34 crc kubenswrapper[4729]: I0127 14:28:34.635721 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:28:34 crc kubenswrapper[4729]: E0127 14:28:34.967868 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 27 14:28:34 crc kubenswrapper[4729]: E0127 14:28:34.968369 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d7hjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-fcvrz_openstack-operators(a27299b3-aeb1-4014-a145-6b5b908542fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:34 crc kubenswrapper[4729]: E0127 14:28:34.969681 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" podUID="a27299b3-aeb1-4014-a145-6b5b908542fc" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.327427 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.343116 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a91ffd3-fab5-40f6-b808-7d0fd80888aa-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7\" (UID: \"5a91ffd3-fab5-40f6-b808-7d0fd80888aa\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:35 crc kubenswrapper[4729]: E0127 14:28:35.378949 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" podUID="a27299b3-aeb1-4014-a145-6b5b908542fc" Jan 27 14:28:35 crc kubenswrapper[4729]: E0127 14:28:35.379066 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" podUID="c9cd8871-5d83-436f-b787-a8769327429d" Jan 27 14:28:35 crc kubenswrapper[4729]: 
I0127 14:28:35.595592 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.735262 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.735438 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.739999 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-metrics-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.747430 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a222b58-d97f-4d40-9bb4-517b4798eb07-webhook-certs\") pod \"openstack-operator-controller-manager-858d4757d5-qn8zm\" (UID: \"4a222b58-d97f-4d40-9bb4-517b4798eb07\") " pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:35 crc kubenswrapper[4729]: 
E0127 14:28:35.835725 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 27 14:28:35 crc kubenswrapper[4729]: E0127 14:28:35.835977 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mpwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-m4z7c_openstack-operators(60c59a6c-9eb5-4869-8d50-2cb234912d6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:35 crc kubenswrapper[4729]: E0127 14:28:35.838519 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" podUID="60c59a6c-9eb5-4869-8d50-2cb234912d6b" Jan 27 14:28:35 crc kubenswrapper[4729]: I0127 14:28:35.888530 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:36 crc kubenswrapper[4729]: E0127 14:28:36.389286 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" podUID="60c59a6c-9eb5-4869-8d50-2cb234912d6b" Jan 27 14:28:36 crc kubenswrapper[4729]: E0127 14:28:36.403402 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 27 14:28:36 crc kubenswrapper[4729]: E0127 14:28:36.403654 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqfmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-9c4lk_openstack-operators(7f77a2ce-03ee-4d74-a7df-052255e0f337): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:36 crc kubenswrapper[4729]: E0127 14:28:36.407066 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" podUID="7f77a2ce-03ee-4d74-a7df-052255e0f337" Jan 27 14:28:37 crc kubenswrapper[4729]: E0127 14:28:37.397937 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" podUID="7f77a2ce-03ee-4d74-a7df-052255e0f337" Jan 27 14:28:39 crc kubenswrapper[4729]: E0127 14:28:39.709030 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 27 14:28:39 crc kubenswrapper[4729]: E0127 14:28:39.709767 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t6lwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-hn29j_openstack-operators(e0d9910b-f1f9-4f1e-b920-dd1c3c787f78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:39 crc kubenswrapper[4729]: E0127 14:28:39.711053 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" podUID="e0d9910b-f1f9-4f1e-b920-dd1c3c787f78" Jan 27 14:28:40 crc kubenswrapper[4729]: E0127 14:28:40.420445 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" podUID="e0d9910b-f1f9-4f1e-b920-dd1c3c787f78" Jan 27 14:28:41 crc kubenswrapper[4729]: E0127 14:28:41.393140 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 27 14:28:41 crc kubenswrapper[4729]: E0127 14:28:41.393357 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ltjqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-22t59_openstack-operators(a393649b-f1a3-44fb-9cb8-a289fcc3f01f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:41 crc kubenswrapper[4729]: E0127 14:28:41.394727 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" podUID="a393649b-f1a3-44fb-9cb8-a289fcc3f01f" Jan 27 14:28:41 crc kubenswrapper[4729]: E0127 14:28:41.429799 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" podUID="a393649b-f1a3-44fb-9cb8-a289fcc3f01f" Jan 27 14:28:42 crc kubenswrapper[4729]: E0127 14:28:42.090550 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e" Jan 27 14:28:42 crc kubenswrapper[4729]: E0127 14:28:42.091070 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pkdpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-598f7747c9-rnnng_openstack-operators(53268481-b675-416f-a9d3-343d349e3bb4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:42 crc kubenswrapper[4729]: E0127 14:28:42.094970 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" podUID="53268481-b675-416f-a9d3-343d349e3bb4" Jan 27 14:28:42 crc kubenswrapper[4729]: E0127 14:28:42.443866 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:4d55bd6418df3f63f4d3fe47bebf3f5498a520b3e14af98fe16c85ef9fd54d5e\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" podUID="53268481-b675-416f-a9d3-343d349e3bb4" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.386354 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.386962 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bp7fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-h98cg_openstack-operators(49b9ec9d-9998-465c-b62f-5c97d5913dd7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.388142 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" podUID="49b9ec9d-9998-465c-b62f-5c97d5913dd7" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.462034 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" podUID="49b9ec9d-9998-465c-b62f-5c97d5913dd7" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.964863 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.965083 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntfwx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-7478f7dbf9-m7jfx_openstack-operators(27edcc9a-7976-42bd-9e8b-a7c95936f305): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:44 crc kubenswrapper[4729]: E0127 14:28:44.966229 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" podUID="27edcc9a-7976-42bd-9e8b-a7c95936f305" Jan 27 14:28:45 crc kubenswrapper[4729]: E0127 14:28:45.470030 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:b916c87806b7eadd83e0ca890c3c24fb990fc5beb48ddc4537e3384efd3e62f7\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" podUID="27edcc9a-7976-42bd-9e8b-a7c95936f305" Jan 27 14:28:45 crc kubenswrapper[4729]: E0127 14:28:45.550438 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 27 14:28:45 crc kubenswrapper[4729]: E0127 14:28:45.550622 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmnnv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-mxmhp_openstack-operators(d5d5726b-1680-44de-9752-2e56e45a3d12): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:45 crc kubenswrapper[4729]: E0127 14:28:45.552011 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" podUID="d5d5726b-1680-44de-9752-2e56e45a3d12" Jan 27 14:28:46 crc kubenswrapper[4729]: E0127 14:28:46.477549 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" podUID="d5d5726b-1680-44de-9752-2e56e45a3d12" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.159479 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.159676 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8k5mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-dpnb5_openstack-operators(d9c052a4-bb18-4634-8acd-13d899dcc8af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.160757 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" podUID="d9c052a4-bb18-4634-8acd-13d899dcc8af" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.484774 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" podUID="d9c052a4-bb18-4634-8acd-13d899dcc8af" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.694803 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.694999 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 
-3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vplf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-khqrr_openstack-operators(6e8131d6-585c-43f8-9231-204ef68de1ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:47 crc kubenswrapper[4729]: E0127 14:28:47.696216 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" podUID="6e8131d6-585c-43f8-9231-204ef68de1ba" Jan 27 14:28:48 crc kubenswrapper[4729]: E0127 14:28:48.338248 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 27 14:28:48 crc kubenswrapper[4729]: E0127 14:28:48.338643 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jq5sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-qlm8l_openstack-operators(64a73e98-23a2-4634-ba0f-fcf5389e38e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:28:48 crc kubenswrapper[4729]: E0127 14:28:48.340966 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" podUID="64a73e98-23a2-4634-ba0f-fcf5389e38e1" Jan 27 14:28:48 crc kubenswrapper[4729]: E0127 14:28:48.499432 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" podUID="64a73e98-23a2-4634-ba0f-fcf5389e38e1" Jan 27 14:28:48 crc kubenswrapper[4729]: E0127 14:28:48.499464 4729 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" podUID="6e8131d6-585c-43f8-9231-204ef68de1ba" Jan 27 14:28:48 crc kubenswrapper[4729]: I0127 14:28:48.718153 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm"] Jan 27 14:28:51 crc kubenswrapper[4729]: I0127 14:28:51.321343 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7"] Jan 27 14:28:51 crc kubenswrapper[4729]: I0127 14:28:51.524096 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" event={"ID":"4a222b58-d97f-4d40-9bb4-517b4798eb07","Type":"ContainerStarted","Data":"657c9749084de50bbb126a50345d8eaac79bed9327db4adfd90421d95034f76f"} Jan 27 14:28:53 crc kubenswrapper[4729]: W0127 14:28:53.497858 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a91ffd3_fab5_40f6_b808_7d0fd80888aa.slice/crio-1354494bef394921de2df57238d4b4b934744a28e85660d8197629d73591d58e WatchSource:0}: Error finding container 1354494bef394921de2df57238d4b4b934744a28e85660d8197629d73591d58e: Status 404 returned error can't find the container with id 1354494bef394921de2df57238d4b4b934744a28e85660d8197629d73591d58e Jan 27 14:28:53 crc kubenswrapper[4729]: I0127 14:28:53.545528 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" 
event={"ID":"5a91ffd3-fab5-40f6-b808-7d0fd80888aa","Type":"ContainerStarted","Data":"1354494bef394921de2df57238d4b4b934744a28e85660d8197629d73591d58e"} Jan 27 14:28:53 crc kubenswrapper[4729]: I0127 14:28:53.938672 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf"] Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.566245 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" event={"ID":"e1ef0def-8b43-404c-a20f-ccffb028796d","Type":"ContainerStarted","Data":"0a298e768abb2e0fb2a4d50ea7db2992dcb7de1abca66d5af4a3c77c5ff81285"} Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.566790 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.571084 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" event={"ID":"6818e775-019d-4bda-94ba-b7e550c9a127","Type":"ContainerStarted","Data":"56e5b43c8ba51ae1aca320fa3e9bfb3165c04ca606297239ec5bfac83cf73b34"} Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.571216 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.573738 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" event={"ID":"4a222b58-d97f-4d40-9bb4-517b4798eb07","Type":"ContainerStarted","Data":"0fbc392aefb1f572a2d6dbe62b68f09ff2d137dab1c197a7a32e85272aeec146"} Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.574266 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.576050 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" event={"ID":"ef99dd2b-4274-4277-8517-c748ef232c38","Type":"ContainerStarted","Data":"f83a2731f222f0ab7019852527d76d9f827897091ad763f669225bcde8ec6e1d"} Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.620493 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" podStartSLOduration=9.218831557 podStartE2EDuration="36.620399432s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:20.875106165 +0000 UTC m=+1387.459297179" lastFinishedPulling="2026-01-27 14:28:48.27667403 +0000 UTC m=+1414.860865054" observedRunningTime="2026-01-27 14:28:54.617588326 +0000 UTC m=+1421.201779330" watchObservedRunningTime="2026-01-27 14:28:54.620399432 +0000 UTC m=+1421.204590446" Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.620605 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" podStartSLOduration=5.154838026 podStartE2EDuration="36.620599707s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.388915619 +0000 UTC m=+1388.973106623" lastFinishedPulling="2026-01-27 14:28:53.8546773 +0000 UTC m=+1420.438868304" observedRunningTime="2026-01-27 14:28:54.59598342 +0000 UTC m=+1421.180174444" watchObservedRunningTime="2026-01-27 14:28:54.620599707 +0000 UTC m=+1421.204790721" Jan 27 14:28:54 crc kubenswrapper[4729]: I0127 14:28:54.658500 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" podStartSLOduration=35.658470536 
podStartE2EDuration="35.658470536s" podCreationTimestamp="2026-01-27 14:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:28:54.654396235 +0000 UTC m=+1421.238587279" watchObservedRunningTime="2026-01-27 14:28:54.658470536 +0000 UTC m=+1421.242661550" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.589262 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" event={"ID":"60c59a6c-9eb5-4869-8d50-2cb234912d6b","Type":"ContainerStarted","Data":"c22de9441db2923af07e0119e872047720d287c0e56f9beec8a60904c5934832"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.589531 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.590830 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" event={"ID":"c9cd8871-5d83-436f-b787-a8769327429d","Type":"ContainerStarted","Data":"da1fc39f5b0b901a750f04f66ce8741842057a4f3f7b6206f48cf7d0d2ea31c4"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.591335 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.592831 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" event={"ID":"73a5b611-6e78-44bf-94ad-2a1fdf4a4819","Type":"ContainerStarted","Data":"550a5b049f41f2a7976923c7df4c3ffc583df480a992601673b74228f6c89543"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.593458 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.616507 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" event={"ID":"7f77a2ce-03ee-4d74-a7df-052255e0f337","Type":"ContainerStarted","Data":"23c2a7d66bdd2478d4ab567af83416f69fb58a9ea44ef7583baf4f2be23efa18"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.616868 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.619439 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" event={"ID":"48e0b394-ae44-484e-821f-b821cd11c656","Type":"ContainerStarted","Data":"95df63cfbb932db99a8f31cce70a3f9c35c69e0128a76dc11e6dca3a387d1c42"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.619603 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.621989 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" event={"ID":"8eb4b08e-edb2-4db8-af1e-549e8e1396d1","Type":"ContainerStarted","Data":"1da99ff9fae4d23b855ef0c69daa1f55055df640aa4be00dcde45ac6d9b1ed7e"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.622168 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.625729 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" 
event={"ID":"eb48ac92-5355-41a2-bdce-f70e47cb91d9","Type":"ContainerStarted","Data":"775b20e90881f5f640ebb4f3420a500b74762479cdb56a38b65fd364a083b1cd"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.626860 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.632246 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" podStartSLOduration=4.539192881 podStartE2EDuration="37.632225687s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:20.761559642 +0000 UTC m=+1387.345750646" lastFinishedPulling="2026-01-27 14:28:53.854592448 +0000 UTC m=+1420.438783452" observedRunningTime="2026-01-27 14:28:55.627246001 +0000 UTC m=+1422.211437005" watchObservedRunningTime="2026-01-27 14:28:55.632225687 +0000 UTC m=+1422.216416691" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.634044 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" podStartSLOduration=6.036846896 podStartE2EDuration="37.634033276s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.303839739 +0000 UTC m=+1388.888030743" lastFinishedPulling="2026-01-27 14:28:53.901026119 +0000 UTC m=+1420.485217123" observedRunningTime="2026-01-27 14:28:55.614943908 +0000 UTC m=+1422.199134912" watchObservedRunningTime="2026-01-27 14:28:55.634033276 +0000 UTC m=+1422.218224280" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.650909 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" 
event={"ID":"a27299b3-aeb1-4014-a145-6b5b908542fc","Type":"ContainerStarted","Data":"84d8bcb3f50a1e1afcc4a36f50ab879cf3dec746b1504393d63eb2aa553b70e7"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.652153 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.662146 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" event={"ID":"80666255-494b-4c9a-8434-49c509505a32","Type":"ContainerStarted","Data":"eb8464ca23b9f99800d482b52a9fed021dcfa3a39b68776e7e69cd28ef996fdc"} Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.662631 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.664762 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" podStartSLOduration=6.130743775 podStartE2EDuration="37.664736949s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.388782066 +0000 UTC m=+1388.972973070" lastFinishedPulling="2026-01-27 14:28:53.92277523 +0000 UTC m=+1420.506966244" observedRunningTime="2026-01-27 14:28:55.660414852 +0000 UTC m=+1422.244605866" watchObservedRunningTime="2026-01-27 14:28:55.664736949 +0000 UTC m=+1422.248927963" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.711715 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" podStartSLOduration=6.20532292 podStartE2EDuration="37.711689965s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.390650756 +0000 UTC m=+1388.974841760" 
lastFinishedPulling="2026-01-27 14:28:53.897017801 +0000 UTC m=+1420.481208805" observedRunningTime="2026-01-27 14:28:55.689536293 +0000 UTC m=+1422.273727297" watchObservedRunningTime="2026-01-27 14:28:55.711689965 +0000 UTC m=+1422.295880969" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.719992 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" podStartSLOduration=4.671918664 podStartE2EDuration="37.7199658s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:20.874332223 +0000 UTC m=+1387.458523237" lastFinishedPulling="2026-01-27 14:28:53.922379369 +0000 UTC m=+1420.506570373" observedRunningTime="2026-01-27 14:28:55.711329024 +0000 UTC m=+1422.295520028" watchObservedRunningTime="2026-01-27 14:28:55.7199658 +0000 UTC m=+1422.304156804" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.777677 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" podStartSLOduration=5.065373211 podStartE2EDuration="36.777655256s" podCreationTimestamp="2026-01-27 14:28:19 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.266401603 +0000 UTC m=+1388.850592597" lastFinishedPulling="2026-01-27 14:28:53.978683638 +0000 UTC m=+1420.562874642" observedRunningTime="2026-01-27 14:28:55.768219979 +0000 UTC m=+1422.352410993" watchObservedRunningTime="2026-01-27 14:28:55.777655256 +0000 UTC m=+1422.361846260" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.802173 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" podStartSLOduration=10.336367531 podStartE2EDuration="37.802153201s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:20.810947152 +0000 UTC m=+1387.395138156" 
lastFinishedPulling="2026-01-27 14:28:48.276732822 +0000 UTC m=+1414.860923826" observedRunningTime="2026-01-27 14:28:55.801326119 +0000 UTC m=+1422.385517123" watchObservedRunningTime="2026-01-27 14:28:55.802153201 +0000 UTC m=+1422.386344205" Jan 27 14:28:55 crc kubenswrapper[4729]: I0127 14:28:55.835382 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" podStartSLOduration=11.830381699 podStartE2EDuration="37.835357322s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.271592484 +0000 UTC m=+1388.855783488" lastFinishedPulling="2026-01-27 14:28:48.276568107 +0000 UTC m=+1414.860759111" observedRunningTime="2026-01-27 14:28:55.817355304 +0000 UTC m=+1422.401546328" watchObservedRunningTime="2026-01-27 14:28:55.835357322 +0000 UTC m=+1422.419548336" Jan 27 14:28:56 crc kubenswrapper[4729]: I0127 14:28:56.074054 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" podStartSLOduration=6.566408876 podStartE2EDuration="38.074034553s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.389910767 +0000 UTC m=+1388.974101781" lastFinishedPulling="2026-01-27 14:28:53.897536454 +0000 UTC m=+1420.481727458" observedRunningTime="2026-01-27 14:28:55.842084585 +0000 UTC m=+1422.426275589" watchObservedRunningTime="2026-01-27 14:28:56.074034553 +0000 UTC m=+1422.658225557" Jan 27 14:28:59 crc kubenswrapper[4729]: I0127 14:28:59.247095 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bfbgb" Jan 27 14:28:59 crc kubenswrapper[4729]: I0127 14:28:59.444296 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-54vz9" 
Jan 27 14:28:59 crc kubenswrapper[4729]: I0127 14:28:59.499104 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-w79q5" Jan 27 14:28:59 crc kubenswrapper[4729]: I0127 14:28:59.666680 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-7q6dx" Jan 27 14:29:00 crc kubenswrapper[4729]: I0127 14:29:00.126203 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-66f997549c-st8m2" Jan 27 14:29:00 crc kubenswrapper[4729]: I0127 14:29:00.213045 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-m4z7c" Jan 27 14:29:00 crc kubenswrapper[4729]: I0127 14:29:00.266769 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-9c4lk" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.720337 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" event={"ID":"53268481-b675-416f-a9d3-343d349e3bb4","Type":"ContainerStarted","Data":"6de5819f3f5ddb6cdf57a3c7000c12521079ef60df0f5573690a13ba0baf025a"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.724482 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" event={"ID":"49b9ec9d-9998-465c-b62f-5c97d5913dd7","Type":"ContainerStarted","Data":"9ff552622a5c6b42255231d74114837a67425d1d29db75ed27538c1f12f16c75"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.725619 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:29:02 crc 
kubenswrapper[4729]: I0127 14:29:02.728122 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" event={"ID":"ef99dd2b-4274-4277-8517-c748ef232c38","Type":"ContainerStarted","Data":"f61fe953dc2f4e2577369282a7366735518d1828ea6afa1e7e9406ee473e924f"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.729704 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" event={"ID":"5a91ffd3-fab5-40f6-b808-7d0fd80888aa","Type":"ContainerStarted","Data":"276c34201eb51e8e251df093aae356b02cdc490430b4d8a1f212f7f566333158"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.729896 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.740047 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" event={"ID":"a393649b-f1a3-44fb-9cb8-a289fcc3f01f","Type":"ContainerStarted","Data":"048993fcad7dba448d8e6415b126f48cf2fd769691bab4b82d2952cd2dfdf2a7"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.740737 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.743385 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" event={"ID":"d9c052a4-bb18-4634-8acd-13d899dcc8af","Type":"ContainerStarted","Data":"7954da4d3b7ead3c7aa35d5849f2df070eef073b1177f680055b304afe7d8f85"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.753166 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" event={"ID":"27edcc9a-7976-42bd-9e8b-a7c95936f305","Type":"ContainerStarted","Data":"377d3a37c269f0328a28bd3350b0443a5772be369e75bd40ccc448d7cb36a804"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.754053 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.756322 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" event={"ID":"e0d9910b-f1f9-4f1e-b920-dd1c3c787f78","Type":"ContainerStarted","Data":"70db7f5c262ea2a6ef995e8e9c0991f934ab7bcc4ba74010716ce3aa9a88dc8b"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.758145 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.758537 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" event={"ID":"d5d5726b-1680-44de-9752-2e56e45a3d12","Type":"ContainerStarted","Data":"355300707fcdfc74014546372ca9b7a68a8950b05ccd34bb41b76f6b75ae7706"} Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.758995 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" podStartSLOduration=4.966502223 podStartE2EDuration="44.758980412s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.25817183 +0000 UTC m=+1388.842362844" lastFinishedPulling="2026-01-27 14:29:02.050650019 +0000 UTC m=+1428.634841033" observedRunningTime="2026-01-27 14:29:02.754334376 +0000 UTC m=+1429.338525370" watchObservedRunningTime="2026-01-27 14:29:02.758980412 +0000 UTC 
m=+1429.343171426" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.759307 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.786465 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" podStartSLOduration=5.141608477 podStartE2EDuration="44.786441937s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.385630621 +0000 UTC m=+1388.969821625" lastFinishedPulling="2026-01-27 14:29:02.030464081 +0000 UTC m=+1428.614655085" observedRunningTime="2026-01-27 14:29:02.783115957 +0000 UTC m=+1429.367306961" watchObservedRunningTime="2026-01-27 14:29:02.786441937 +0000 UTC m=+1429.370632971" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.810920 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" podStartSLOduration=4.808138334 podStartE2EDuration="44.810904902s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.048595499 +0000 UTC m=+1388.632786503" lastFinishedPulling="2026-01-27 14:29:02.051362077 +0000 UTC m=+1428.635553071" observedRunningTime="2026-01-27 14:29:02.807101748 +0000 UTC m=+1429.391292772" watchObservedRunningTime="2026-01-27 14:29:02.810904902 +0000 UTC m=+1429.395095916" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.857210 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" podStartSLOduration=36.586449071 podStartE2EDuration="44.857188409s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:53.598751212 +0000 UTC m=+1420.182942216" 
lastFinishedPulling="2026-01-27 14:29:01.86949055 +0000 UTC m=+1428.453681554" observedRunningTime="2026-01-27 14:29:02.856274664 +0000 UTC m=+1429.440465688" watchObservedRunningTime="2026-01-27 14:29:02.857188409 +0000 UTC m=+1429.441379423" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.961349 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" podStartSLOduration=4.316104295 podStartE2EDuration="43.961333226s" podCreationTimestamp="2026-01-27 14:28:19 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.385944019 +0000 UTC m=+1388.970135013" lastFinishedPulling="2026-01-27 14:29:02.03117294 +0000 UTC m=+1428.615363944" observedRunningTime="2026-01-27 14:29:02.960665339 +0000 UTC m=+1429.544856343" watchObservedRunningTime="2026-01-27 14:29:02.961333226 +0000 UTC m=+1429.545524230" Jan 27 14:29:02 crc kubenswrapper[4729]: I0127 14:29:02.967162 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" podStartSLOduration=5.963464835 podStartE2EDuration="44.967139575s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.323111483 +0000 UTC m=+1388.907302487" lastFinishedPulling="2026-01-27 14:29:01.326786223 +0000 UTC m=+1427.910977227" observedRunningTime="2026-01-27 14:29:02.924070995 +0000 UTC m=+1429.508262019" watchObservedRunningTime="2026-01-27 14:29:02.967139575 +0000 UTC m=+1429.551330579" Jan 27 14:29:03 crc kubenswrapper[4729]: I0127 14:29:03.768477 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:29:03 crc kubenswrapper[4729]: I0127 14:29:03.797132 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" 
podStartSLOduration=37.713182695 podStartE2EDuration="45.797115711s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:53.968156542 +0000 UTC m=+1420.552347546" lastFinishedPulling="2026-01-27 14:29:02.052089548 +0000 UTC m=+1428.636280562" observedRunningTime="2026-01-27 14:29:03.791133909 +0000 UTC m=+1430.375324933" watchObservedRunningTime="2026-01-27 14:29:03.797115711 +0000 UTC m=+1430.381306715" Jan 27 14:29:03 crc kubenswrapper[4729]: I0127 14:29:03.818819 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" podStartSLOduration=5.9879941290000005 podStartE2EDuration="45.81880419s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.386337249 +0000 UTC m=+1388.970528253" lastFinishedPulling="2026-01-27 14:29:02.21714731 +0000 UTC m=+1428.801338314" observedRunningTime="2026-01-27 14:29:03.816261631 +0000 UTC m=+1430.400452645" watchObservedRunningTime="2026-01-27 14:29:03.81880419 +0000 UTC m=+1430.402995194" Jan 27 14:29:03 crc kubenswrapper[4729]: I0127 14:29:03.849171 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" podStartSLOduration=5.875795422 podStartE2EDuration="45.849151293s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.082716825 +0000 UTC m=+1388.666907829" lastFinishedPulling="2026-01-27 14:29:02.056072696 +0000 UTC m=+1428.640263700" observedRunningTime="2026-01-27 14:29:03.827629769 +0000 UTC m=+1430.411820773" watchObservedRunningTime="2026-01-27 14:29:03.849151293 +0000 UTC m=+1430.433342287" Jan 27 14:29:04 crc kubenswrapper[4729]: I0127 14:29:04.111988 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:29:04 crc kubenswrapper[4729]: I0127 14:29:04.778909 
4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" event={"ID":"6e8131d6-585c-43f8-9231-204ef68de1ba","Type":"ContainerStarted","Data":"43300d049e858fddc603339190af2167c64732ddf74103ab34feda7c253c0ad2"} Jan 27 14:29:04 crc kubenswrapper[4729]: I0127 14:29:04.781182 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" event={"ID":"64a73e98-23a2-4634-ba0f-fcf5389e38e1","Type":"ContainerStarted","Data":"8e724fb51e6a8fd763fbc3bb030f4a8edf6dc1fb88f77bb85e9696d35698fa0b"} Jan 27 14:29:04 crc kubenswrapper[4729]: I0127 14:29:04.781467 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:29:04 crc kubenswrapper[4729]: I0127 14:29:04.798982 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" podStartSLOduration=3.466647968 podStartE2EDuration="46.798959915s" podCreationTimestamp="2026-01-27 14:28:18 +0000 UTC" firstStartedPulling="2026-01-27 14:28:20.780829735 +0000 UTC m=+1387.365020739" lastFinishedPulling="2026-01-27 14:29:04.113141672 +0000 UTC m=+1430.697332686" observedRunningTime="2026-01-27 14:29:04.795990554 +0000 UTC m=+1431.380181578" watchObservedRunningTime="2026-01-27 14:29:04.798959915 +0000 UTC m=+1431.383150919" Jan 27 14:29:05 crc kubenswrapper[4729]: I0127 14:29:05.827840 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-khqrr" podStartSLOduration=4.379964791 podStartE2EDuration="46.827820992s" podCreationTimestamp="2026-01-27 14:28:19 +0000 UTC" firstStartedPulling="2026-01-27 14:28:22.139244361 +0000 UTC m=+1388.723435365" lastFinishedPulling="2026-01-27 14:29:04.587100572 +0000 UTC m=+1431.171291566" 
observedRunningTime="2026-01-27 14:29:05.820077921 +0000 UTC m=+1432.404268945" watchObservedRunningTime="2026-01-27 14:29:05.827820992 +0000 UTC m=+1432.412011996" Jan 27 14:29:05 crc kubenswrapper[4729]: I0127 14:29:05.895511 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-858d4757d5-qn8zm" Jan 27 14:29:08 crc kubenswrapper[4729]: I0127 14:29:08.777108 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-fcvrz" Jan 27 14:29:08 crc kubenswrapper[4729]: I0127 14:29:08.817048 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4bpwj" Jan 27 14:29:08 crc kubenswrapper[4729]: I0127 14:29:08.892849 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-ck286" Jan 27 14:29:08 crc kubenswrapper[4729]: I0127 14:29:08.917522 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-5csls" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.222504 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.225259 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-rnnng" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.300502 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-h98cg" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.341736 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-hn29j" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.704631 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.706820 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-dpnb5" Jan 27 14:29:09 crc kubenswrapper[4729]: I0127 14:29:09.753841 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-m7jfx" Jan 27 14:29:10 crc kubenswrapper[4729]: I0127 14:29:10.105897 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-22t59" Jan 27 14:29:10 crc kubenswrapper[4729]: I0127 14:29:10.229079 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-mxmhp" Jan 27 14:29:14 crc kubenswrapper[4729]: I0127 14:29:14.642954 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-vm6sf" Jan 27 14:29:15 crc kubenswrapper[4729]: I0127 14:29:15.602744 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7" Jan 27 14:29:18 crc kubenswrapper[4729]: I0127 14:29:18.965014 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qlm8l" Jan 27 14:29:22 crc kubenswrapper[4729]: I0127 14:29:22.655201 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:29:22 crc kubenswrapper[4729]: I0127 14:29:22.656190 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.130312 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-64hn6"] Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.132153 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.137249 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.137317 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.137488 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-2lnmt" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.137560 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.156161 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-64hn6"] Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.220385 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7d82605-52c6-48c8-b914-d2b3788a4f60-config\") pod \"dnsmasq-dns-675f4bcbfc-64hn6\" (UID: \"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.220447 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c256\" (UniqueName: \"kubernetes.io/projected/a7d82605-52c6-48c8-b914-d2b3788a4f60-kube-api-access-9c256\") pod \"dnsmasq-dns-675f4bcbfc-64hn6\" (UID: \"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.240981 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n2srs"] Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.242728 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.256143 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.266115 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n2srs"] Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.322735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d82605-52c6-48c8-b914-d2b3788a4f60-config\") pod \"dnsmasq-dns-675f4bcbfc-64hn6\" (UID: \"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.322810 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c256\" (UniqueName: \"kubernetes.io/projected/a7d82605-52c6-48c8-b914-d2b3788a4f60-kube-api-access-9c256\") pod \"dnsmasq-dns-675f4bcbfc-64hn6\" (UID: 
\"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.322936 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qll6v\" (UniqueName: \"kubernetes.io/projected/56d88efe-226f-4632-b56f-33df1c2a1826-kube-api-access-qll6v\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.324037 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d82605-52c6-48c8-b914-d2b3788a4f60-config\") pod \"dnsmasq-dns-675f4bcbfc-64hn6\" (UID: \"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.329534 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-config\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.329945 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.362317 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c256\" (UniqueName: \"kubernetes.io/projected/a7d82605-52c6-48c8-b914-d2b3788a4f60-kube-api-access-9c256\") pod \"dnsmasq-dns-675f4bcbfc-64hn6\" (UID: 
\"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.431598 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.431739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qll6v\" (UniqueName: \"kubernetes.io/projected/56d88efe-226f-4632-b56f-33df1c2a1826-kube-api-access-qll6v\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.431763 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-config\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.432483 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.432603 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-config\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc 
kubenswrapper[4729]: I0127 14:29:35.454363 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qll6v\" (UniqueName: \"kubernetes.io/projected/56d88efe-226f-4632-b56f-33df1c2a1826-kube-api-access-qll6v\") pod \"dnsmasq-dns-78dd6ddcc-n2srs\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.457393 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:29:35 crc kubenswrapper[4729]: I0127 14:29:35.561638 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:29:36 crc kubenswrapper[4729]: I0127 14:29:36.024104 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n2srs"] Jan 27 14:29:36 crc kubenswrapper[4729]: W0127 14:29:36.026521 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56d88efe_226f_4632_b56f_33df1c2a1826.slice/crio-c239475812f892f6d01efe4d32f6e7d9137d627e9456252b4c971b6e7e64d0b3 WatchSource:0}: Error finding container c239475812f892f6d01efe4d32f6e7d9137d627e9456252b4c971b6e7e64d0b3: Status 404 returned error can't find the container with id c239475812f892f6d01efe4d32f6e7d9137d627e9456252b4c971b6e7e64d0b3 Jan 27 14:29:36 crc kubenswrapper[4729]: I0127 14:29:36.275536 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-64hn6"] Jan 27 14:29:36 crc kubenswrapper[4729]: W0127 14:29:36.280541 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7d82605_52c6_48c8_b914_d2b3788a4f60.slice/crio-57c4ea78a902cb2cad9d7b1470ea4b96a1b0789cb88c417d10769f6d0ce6d455 WatchSource:0}: Error finding container 
57c4ea78a902cb2cad9d7b1470ea4b96a1b0789cb88c417d10769f6d0ce6d455: Status 404 returned error can't find the container with id 57c4ea78a902cb2cad9d7b1470ea4b96a1b0789cb88c417d10769f6d0ce6d455 Jan 27 14:29:37 crc kubenswrapper[4729]: I0127 14:29:37.032943 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" event={"ID":"56d88efe-226f-4632-b56f-33df1c2a1826","Type":"ContainerStarted","Data":"c239475812f892f6d01efe4d32f6e7d9137d627e9456252b4c971b6e7e64d0b3"} Jan 27 14:29:37 crc kubenswrapper[4729]: I0127 14:29:37.036642 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" event={"ID":"a7d82605-52c6-48c8-b914-d2b3788a4f60","Type":"ContainerStarted","Data":"57c4ea78a902cb2cad9d7b1470ea4b96a1b0789cb88c417d10769f6d0ce6d455"} Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.258090 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-64hn6"] Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.320954 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qfcl6"] Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.322854 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.348417 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qfcl6"] Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.408076 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz9dm\" (UniqueName: \"kubernetes.io/projected/b07f2206-b339-4038-be2f-d0e3301064e0-kube-api-access-xz9dm\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.408152 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.408283 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-config\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.510739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz9dm\" (UniqueName: \"kubernetes.io/projected/b07f2206-b339-4038-be2f-d0e3301064e0-kube-api-access-xz9dm\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.510810 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.510921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-config\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.511838 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.512203 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-config\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.581227 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz9dm\" (UniqueName: \"kubernetes.io/projected/b07f2206-b339-4038-be2f-d0e3301064e0-kube-api-access-xz9dm\") pod \"dnsmasq-dns-666b6646f7-qfcl6\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") " pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.650402 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.864482 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n2srs"] Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.936678 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69svg"] Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.938755 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:38 crc kubenswrapper[4729]: I0127 14:29:38.986958 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69svg"] Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.033153 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.033218 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrxx\" (UniqueName: \"kubernetes.io/projected/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-kube-api-access-fhrxx\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.033312 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-config\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 
14:29:39.135613 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.137451 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrxx\" (UniqueName: \"kubernetes.io/projected/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-kube-api-access-fhrxx\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.137071 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.139988 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-config\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.140801 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-config\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.182094 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrxx\" 
(UniqueName: \"kubernetes.io/projected/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-kube-api-access-fhrxx\") pod \"dnsmasq-dns-57d769cc4f-69svg\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.336073 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.483346 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.486034 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.489156 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.493152 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.493433 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.493596 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.493848 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-lnrwp" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.494085 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.494831 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.537129 4729 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556446 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556484 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b490b2c5-e772-48d2-a2cc-582bda8b019e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556541 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556564 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b490b2c5-e772-48d2-a2cc-582bda8b019e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556599 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556620 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556661 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9f7w\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-kube-api-access-m9f7w\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556696 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556715 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556748 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-config-data\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " 
pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.556762 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.559671 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.589118 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.673997 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674254 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674285 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674323 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmsgt\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-kube-api-access-tmsgt\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674350 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674747 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9f7w\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-kube-api-access-m9f7w\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674790 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-config-data\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.674827 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.675725 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.676306 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.675851 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d148c837-c681-4446-9e81-195c19108d09-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677021 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-config-data\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677091 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677138 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d148c837-c681-4446-9e81-195c19108d09-pod-info\") pod 
\"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677284 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677314 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b490b2c5-e772-48d2-a2cc-582bda8b019e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677351 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677441 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677707 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 
27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.677810 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.678137 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.678177 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.678231 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b490b2c5-e772-48d2-a2cc-582bda8b019e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.678566 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-config-data\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.691648 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b490b2c5-e772-48d2-a2cc-582bda8b019e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.692109 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.692763 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.692943 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/49b1a718696a83970344de30fbccc78f0978419d3cdeeb51ab24f883887fa19b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.695614 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b490b2c5-e772-48d2-a2cc-582bda8b019e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.701178 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.701462 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.702870 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.703844 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.724951 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.730230 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9f7w\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-kube-api-access-m9f7w\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.730406 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.750410 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.764232 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.805331 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qfcl6"]
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806087 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806190 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806273 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806349 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806380 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806449 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/190a5200-58b1-4ada-ab5f-47543de0795e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806550 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmsgt\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-kube-api-access-tmsgt\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806578 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.806649 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.807897 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-config-data\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.808008 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dgz4\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-kube-api-access-4dgz4\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.808077 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-config-data\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.808125 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d148c837-c681-4446-9e81-195c19108d09-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.808158 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.808172 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.808186 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.809163 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-server-conf\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.809192 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " pod="openstack/rabbitmq-server-0"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.809720 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-config-data\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.809994 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d148c837-c681-4446-9e81-195c19108d09-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.810035 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.810320 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.810511 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.810553 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.810580 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.810606 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/190a5200-58b1-4ada-ab5f-47543de0795e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.811057 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.811479 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.813102 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.815556 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.815627 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/79fa808a8f7422c64123714813980a731a32ab42a568b8eb44235174099bb0b9/globalmount\"" pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.827019 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d148c837-c681-4446-9e81-195c19108d09-pod-info\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.831244 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.836480 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.845056 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d148c837-c681-4446-9e81-195c19108d09-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.849736 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmsgt\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-kube-api-access-tmsgt\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.902421 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914642 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dgz4\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-kube-api-access-4dgz4\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914708 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-config-data\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914754 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914780 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914807 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.914989 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.915016 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/190a5200-58b1-4ada-ab5f-47543de0795e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.915080 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.915154 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/190a5200-58b1-4ada-ab5f-47543de0795e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.915197 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.915384 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.916226 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.916288 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-config-data\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.919030 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-server-conf\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.921996 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.922281 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.923322 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/190a5200-58b1-4ada-ab5f-47543de0795e-pod-info\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.926793 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.926847 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9a4be69ff768d889fe3e43ff3b03318c02894e033bfc6af486c241199bf0c68/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.926921 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.955460 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dgz4\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-kube-api-access-4dgz4\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.960592 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 27 14:29:39 crc kubenswrapper[4729]: I0127 14:29:39.967416 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/190a5200-58b1-4ada-ab5f-47543de0795e-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.110673 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " pod="openstack/rabbitmq-server-1"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.122562 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.128479 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.128533 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" event={"ID":"b07f2206-b339-4038-be2f-d0e3301064e0","Type":"ContainerStarted","Data":"6388d643bba207c8d10ddbfeeb81fd32a85c6d4dfc810671a8267b24564da4dd"}
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.128647 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.137362 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.137595 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.137739 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.137965 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.138190 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.138968 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.142815 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7b4hd"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.168839 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69svg"]
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.181632 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341043 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13cfdd20-ad90-472d-8962-6bec29b3fa74-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341152 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341214 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2llh\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-kube-api-access-c2llh\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341259 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341333 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13cfdd20-ad90-472d-8962-6bec29b3fa74-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341383 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341405 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341426 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341462 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341494 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.341541 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.447930 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13cfdd20-ad90-472d-8962-6bec29b3fa74-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448233 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448272 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2llh\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-kube-api-access-c2llh\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448298 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448346 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13cfdd20-ad90-472d-8962-6bec29b3fa74-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448382 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448399 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448413 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448436 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448459 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.448495 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.452447 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.455579 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.455864 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.456590 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.457955 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.463486 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.463534 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f096062f903b408b3eba2bc5650a80f546d059a652f497c12a0811100235309f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.468199 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.473568 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName:
\"kubernetes.io/downward-api/13cfdd20-ad90-472d-8962-6bec29b3fa74-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.473592 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.473668 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13cfdd20-ad90-472d-8962-6bec29b3fa74-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.480578 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2llh\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-kube-api-access-c2llh\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.519265 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.811433 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.816364 4729 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:29:40 crc kubenswrapper[4729]: I0127 14:29:40.850108 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.048305 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.064115 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.068676 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.082801 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-hlhrt" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.083299 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.083542 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.085712 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.099200 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166200 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-config-data-default\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 
14:29:41.166290 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166317 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-kolla-config\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166345 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166571 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzb7h\" (UniqueName: \"kubernetes.io/projected/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-kube-api-access-mzb7h\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166674 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166786 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.166898 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.171380 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.217996 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d148c837-c681-4446-9e81-195c19108d09","Type":"ContainerStarted","Data":"2c63e8eaf8c96324b2b1ebb789413142b9d522fd5aa7ad5f2fb892b7c221cac7"} Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.232278 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b490b2c5-e772-48d2-a2cc-582bda8b019e","Type":"ContainerStarted","Data":"5d25bc2430dee39a4493b24d644094ccf4647d046320b160a8841638589820ea"} Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.252036 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" event={"ID":"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c","Type":"ContainerStarted","Data":"3070c9a98045c27cb517fece414314e5cdae99dda4e93e7baf8ecf4e42cd631b"} Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.294622 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.294764 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.294890 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.295097 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-config-data-default\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.295166 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.295197 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-kolla-config\") pod 
\"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.295241 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.295355 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzb7h\" (UniqueName: \"kubernetes.io/projected/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-kube-api-access-mzb7h\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.298475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.304593 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.305289 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-kolla-config\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc 
kubenswrapper[4729]: I0127 14:29:41.311085 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.311151 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fb3eae1a138b4c21928540254e85d6d551063e1849a300b1fd86d240c7905298/globalmount\"" pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.325751 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-config-data-default\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.331549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.334821 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzb7h\" (UniqueName: \"kubernetes.io/projected/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-kube-api-access-mzb7h\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.334993 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e387f91f-9a73-4c8b-8e0b-31ed4c3874ba-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.393942 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3c9c95d8-9b2d-4795-aab7-fb46e82b1d3d\") pod \"openstack-galera-0\" (UID: \"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba\") " pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.530479 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 27 14:29:41 crc kubenswrapper[4729]: I0127 14:29:41.685997 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:29:41 crc kubenswrapper[4729]: W0127 14:29:41.771479 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-3f271adda80a39d7c5d837f3cb070d4a0ae2686d8abe564434031fffc3866c53 WatchSource:0}: Error finding container 3f271adda80a39d7c5d837f3cb070d4a0ae2686d8abe564434031fffc3866c53: Status 404 returned error can't find the container with id 3f271adda80a39d7c5d837f3cb070d4a0ae2686d8abe564434031fffc3866c53 Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.214098 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.279738 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.281817 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.289962 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.290096 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-svhx6" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.290392 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.290661 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.297488 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.302719 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"190a5200-58b1-4ada-ab5f-47543de0795e","Type":"ContainerStarted","Data":"03c505022d9c0f4c9d0706cfa88bb39fa1f520eb68c15f12de5f98873885b1b9"} Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.328789 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"13cfdd20-ad90-472d-8962-6bec29b3fa74","Type":"ContainerStarted","Data":"3f271adda80a39d7c5d837f3cb070d4a0ae2686d8abe564434031fffc3866c53"} Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.348325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba","Type":"ContainerStarted","Data":"debc90d8517da91d8be5aa4a8ee9da362d9a2f3ab0f91a89169e392e58475f9c"} Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434239 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434322 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434372 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kx8x\" (UniqueName: \"kubernetes.io/projected/37137af3-5865-4774-a6bc-4a96bb11a68d-kube-api-access-4kx8x\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/37137af3-5865-4774-a6bc-4a96bb11a68d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434551 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434709 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.434956 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37137af3-5865-4774-a6bc-4a96bb11a68d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.435028 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/37137af3-5865-4774-a6bc-4a96bb11a68d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.461776 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.463894 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.466319 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.466472 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.466621 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vv2sk" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.475817 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536566 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/37137af3-5865-4774-a6bc-4a96bb11a68d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536632 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/041a96ab-9f21-4d02-80df-cf7d6a81323b-kolla-config\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536675 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536702 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/041a96ab-9f21-4d02-80df-cf7d6a81323b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536741 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536809 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfv7\" (UniqueName: \"kubernetes.io/projected/041a96ab-9f21-4d02-80df-cf7d6a81323b-kube-api-access-gnfv7\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536848 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37137af3-5865-4774-a6bc-4a96bb11a68d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536906 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/37137af3-5865-4774-a6bc-4a96bb11a68d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536934 4729 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.536970 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/041a96ab-9f21-4d02-80df-cf7d6a81323b-config-data\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.537003 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.537035 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kx8x\" (UniqueName: \"kubernetes.io/projected/37137af3-5865-4774-a6bc-4a96bb11a68d-kube-api-access-4kx8x\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.537120 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041a96ab-9f21-4d02-80df-cf7d6a81323b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.537753 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/37137af3-5865-4774-a6bc-4a96bb11a68d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.538886 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.543008 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.543348 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37137af3-5865-4774-a6bc-4a96bb11a68d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.552301 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37137af3-5865-4774-a6bc-4a96bb11a68d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.557067 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/37137af3-5865-4774-a6bc-4a96bb11a68d-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.557214 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.557344 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b2b42885ce031948c50e2b3863e2f8af7fa8b13a60b188ee5e99cbcaf09cdba8/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.564837 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kx8x\" (UniqueName: \"kubernetes.io/projected/37137af3-5865-4774-a6bc-4a96bb11a68d-kube-api-access-4kx8x\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.642366 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfv7\" (UniqueName: \"kubernetes.io/projected/041a96ab-9f21-4d02-80df-cf7d6a81323b-kube-api-access-gnfv7\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.642476 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/041a96ab-9f21-4d02-80df-cf7d6a81323b-config-data\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " 
pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.642579 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041a96ab-9f21-4d02-80df-cf7d6a81323b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.642615 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/041a96ab-9f21-4d02-80df-cf7d6a81323b-kolla-config\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.642640 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/041a96ab-9f21-4d02-80df-cf7d6a81323b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.643584 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/041a96ab-9f21-4d02-80df-cf7d6a81323b-kolla-config\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.643862 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/041a96ab-9f21-4d02-80df-cf7d6a81323b-config-data\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.646201 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/041a96ab-9f21-4d02-80df-cf7d6a81323b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.650581 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/041a96ab-9f21-4d02-80df-cf7d6a81323b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.662690 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfv7\" (UniqueName: \"kubernetes.io/projected/041a96ab-9f21-4d02-80df-cf7d6a81323b-kube-api-access-gnfv7\") pod \"memcached-0\" (UID: \"041a96ab-9f21-4d02-80df-cf7d6a81323b\") " pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.689495 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abe7fe59-7ff6-4503-b945-0a28d5c9aa73\") pod \"openstack-cell1-galera-0\" (UID: \"37137af3-5865-4774-a6bc-4a96bb11a68d\") " pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.818904 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 27 14:29:42 crc kubenswrapper[4729]: I0127 14:29:42.955412 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 27 14:29:43 crc kubenswrapper[4729]: I0127 14:29:43.359781 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 27 14:29:43 crc kubenswrapper[4729]: W0127 14:29:43.441837 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041a96ab_9f21_4d02_80df_cf7d6a81323b.slice/crio-20b3e0c96187dc9a03c852c95c940df1d701d3acc7caf95a0bbe53c362c28252 WatchSource:0}: Error finding container 20b3e0c96187dc9a03c852c95c940df1d701d3acc7caf95a0bbe53c362c28252: Status 404 returned error can't find the container with id 20b3e0c96187dc9a03c852c95c940df1d701d3acc7caf95a0bbe53c362c28252 Jan 27 14:29:43 crc kubenswrapper[4729]: I0127 14:29:43.965637 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 27 14:29:43 crc kubenswrapper[4729]: W0127 14:29:43.970129 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37137af3_5865_4774_a6bc_4a96bb11a68d.slice/crio-b54ec85d6d6d278a646979eb162b6eaf1ccf14301aabd71b82031d9ad9b5dc17 WatchSource:0}: Error finding container b54ec85d6d6d278a646979eb162b6eaf1ccf14301aabd71b82031d9ad9b5dc17: Status 404 returned error can't find the container with id b54ec85d6d6d278a646979eb162b6eaf1ccf14301aabd71b82031d9ad9b5dc17 Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.310819 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.313956 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.318622 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9rmrz" Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.335251 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.391840 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"041a96ab-9f21-4d02-80df-cf7d6a81323b","Type":"ContainerStarted","Data":"20b3e0c96187dc9a03c852c95c940df1d701d3acc7caf95a0bbe53c362c28252"} Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.393629 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"37137af3-5865-4774-a6bc-4a96bb11a68d","Type":"ContainerStarted","Data":"b54ec85d6d6d278a646979eb162b6eaf1ccf14301aabd71b82031d9ad9b5dc17"} Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.406684 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvjh\" (UniqueName: \"kubernetes.io/projected/69bd8e93-2421-411f-ad18-0a92631e3345-kube-api-access-9rvjh\") pod \"kube-state-metrics-0\" (UID: \"69bd8e93-2421-411f-ad18-0a92631e3345\") " pod="openstack/kube-state-metrics-0" Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.513085 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rvjh\" (UniqueName: \"kubernetes.io/projected/69bd8e93-2421-411f-ad18-0a92631e3345-kube-api-access-9rvjh\") pod \"kube-state-metrics-0\" (UID: \"69bd8e93-2421-411f-ad18-0a92631e3345\") " pod="openstack/kube-state-metrics-0" Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.556853 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rvjh\" (UniqueName: 
\"kubernetes.io/projected/69bd8e93-2421-411f-ad18-0a92631e3345-kube-api-access-9rvjh\") pod \"kube-state-metrics-0\" (UID: \"69bd8e93-2421-411f-ad18-0a92631e3345\") " pod="openstack/kube-state-metrics-0" Jan 27 14:29:44 crc kubenswrapper[4729]: I0127 14:29:44.661009 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.453395 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.465696 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv"] Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.467616 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.477716 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-bf7j9" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.485689 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.490456 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv"] Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.539952 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 
14:29:45.540049 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxwpp\" (UniqueName: \"kubernetes.io/projected/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-kube-api-access-rxwpp\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.642059 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.642361 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxwpp\" (UniqueName: \"kubernetes.io/projected/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-kube-api-access-rxwpp\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:45 crc kubenswrapper[4729]: E0127 14:29:45.642257 4729 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 27 14:29:45 crc kubenswrapper[4729]: E0127 14:29:45.642484 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-serving-cert podName:be65005b-48eb-45fe-b1e7-f5b5416fd8f3 nodeName:}" failed. No retries permitted until 2026-01-27 14:29:46.142453603 +0000 UTC m=+1472.726644667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-serving-cert") pod "observability-ui-dashboards-66cbf594b5-m2dsv" (UID: "be65005b-48eb-45fe-b1e7-f5b5416fd8f3") : secret "observability-ui-dashboards" not found Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.684686 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxwpp\" (UniqueName: \"kubernetes.io/projected/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-kube-api-access-rxwpp\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.872438 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6964f4b777-tqx2x"] Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.874102 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.894710 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6964f4b777-tqx2x"] Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951212 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgx4\" (UniqueName: \"kubernetes.io/projected/2816d7ab-7539-4c89-b49b-f48dab2dca3a-kube-api-access-lkgx4\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951262 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-service-ca\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951355 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-oauth-config\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951395 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-trusted-ca-bundle\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951508 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-serving-cert\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951536 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-config\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:45 crc kubenswrapper[4729]: I0127 14:29:45.951575 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-oauth-serving-cert\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.059762 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgx4\" (UniqueName: \"kubernetes.io/projected/2816d7ab-7539-4c89-b49b-f48dab2dca3a-kube-api-access-lkgx4\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.059822 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-service-ca\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.060001 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-oauth-config\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.060051 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-trusted-ca-bundle\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.060198 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-serving-cert\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.060248 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-config\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.060311 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-oauth-serving-cert\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.060944 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-service-ca\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.061062 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-oauth-serving-cert\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.061658 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-config\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.061989 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2816d7ab-7539-4c89-b49b-f48dab2dca3a-trusted-ca-bundle\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.076953 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-serving-cert\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.076964 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2816d7ab-7539-4c89-b49b-f48dab2dca3a-console-oauth-config\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.086026 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgx4\" (UniqueName: \"kubernetes.io/projected/2816d7ab-7539-4c89-b49b-f48dab2dca3a-kube-api-access-lkgx4\") pod \"console-6964f4b777-tqx2x\" (UID: \"2816d7ab-7539-4c89-b49b-f48dab2dca3a\") " pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.169188 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.181801 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be65005b-48eb-45fe-b1e7-f5b5416fd8f3-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-m2dsv\" (UID: \"be65005b-48eb-45fe-b1e7-f5b5416fd8f3\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.207352 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.440254 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"69bd8e93-2421-411f-ad18-0a92631e3345","Type":"ContainerStarted","Data":"ef0db48f6e2af206d1598a828ed69320724b8f9c309b30bc5213f6ace67043ac"} Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.454304 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.623339 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.627967 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.641770 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.642216 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2dbph" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.642478 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.642629 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.642749 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.643205 4729 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.643324 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.644335 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.692313 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696418 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696465 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696501 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696702 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696730 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696813 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696866 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696906 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " 
pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696938 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.696975 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62qcf\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-kube-api-access-62qcf\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.799633 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.802284 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.802560 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.803286 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.804289 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.805371 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.805679 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.806584 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62qcf\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-kube-api-access-62qcf\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.807664 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.807724 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.807776 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.808582 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.808958 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.809821 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.809868 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b21704c3f0d71a2fe30bdd41c3791c72542ca1dfc4c58962ce47cac47fe929c/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.811329 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.811362 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.811558 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.812332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.812806 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.828052 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62qcf\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-kube-api-access-62qcf\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.872407 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " pod="openstack/prometheus-metric-storage-0"
Jan 27 14:29:46 crc kubenswrapper[4729]: I0127 14:29:46.974858 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6964f4b777-tqx2x"]
Jan 27 14:29:47 crc kubenswrapper[4729]: I0127 14:29:47.004222 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv"]
Jan 27 14:29:47 crc kubenswrapper[4729]: I0127 14:29:47.087463 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.460092 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" event={"ID":"be65005b-48eb-45fe-b1e7-f5b5416fd8f3","Type":"ContainerStarted","Data":"7dbdddec4cfa4d3eed06233cf71ceed881fb8871773bb13883bb7017e17b2944"}
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.462632 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6964f4b777-tqx2x" event={"ID":"2816d7ab-7539-4c89-b49b-f48dab2dca3a","Type":"ContainerStarted","Data":"032ee646278e1b769167f6e76a11af448f8d5f6edb322a371a2c282787e57603"}
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.462664 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6964f4b777-tqx2x" event={"ID":"2816d7ab-7539-4c89-b49b-f48dab2dca3a","Type":"ContainerStarted","Data":"7195a7a86c4a668eeb3b786a080d84bc41ed236df150c73f862f255c098fbbd6"}
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.803506 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gk2cz"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.805540 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.814380 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gsqqc"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.815250 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qwnpn"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.816815 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.817215 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.817685 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.822907 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gk2cz"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.831365 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gsqqc"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.959572 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-run\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.959617 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-run-ovn\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.962974 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmskd\" (UniqueName: \"kubernetes.io/projected/aff22ed6-2491-4c78-94da-02f4b51493b8-kube-api-access-xmskd\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.963514 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff22ed6-2491-4c78-94da-02f4b51493b8-scripts\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.963733 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-run\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.963858 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8znf\" (UniqueName: \"kubernetes.io/projected/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-kube-api-access-d8znf\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.963964 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-scripts\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.964031 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-etc-ovs\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.964365 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-log\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.964533 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-combined-ca-bundle\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.964674 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-log-ovn\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.964748 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-ovn-controller-tls-certs\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:47.964784 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-lib\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068047 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8znf\" (UniqueName: \"kubernetes.io/projected/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-kube-api-access-d8znf\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068116 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-scripts\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068149 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-etc-ovs\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068354 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-log\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068424 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-combined-ca-bundle\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068481 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-log-ovn\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.068966 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-ovn-controller-tls-certs\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069011 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-lib\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-run\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069126 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-run-ovn\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069256 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmskd\" (UniqueName: \"kubernetes.io/projected/aff22ed6-2491-4c78-94da-02f4b51493b8-kube-api-access-xmskd\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069294 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff22ed6-2491-4c78-94da-02f4b51493b8-scripts\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069317 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-run\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.069990 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-run\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.070254 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-log-ovn\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.072229 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-var-run-ovn\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.072261 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-run\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.072355 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-lib\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.072383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-etc-ovs\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.075153 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff22ed6-2491-4c78-94da-02f4b51493b8-scripts\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.077152 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aff22ed6-2491-4c78-94da-02f4b51493b8-var-log\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.089789 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-scripts\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.090522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8znf\" (UniqueName: \"kubernetes.io/projected/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-kube-api-access-d8znf\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.105058 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-ovn-controller-tls-certs\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.105520 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4b5a47-ff01-4fd6-b69f-4d70efc77a12-combined-ca-bundle\") pod \"ovn-controller-gk2cz\" (UID: \"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12\") " pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.112598 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmskd\" (UniqueName: \"kubernetes.io/projected/aff22ed6-2491-4c78-94da-02f4b51493b8-kube-api-access-xmskd\") pod \"ovn-controller-ovs-gsqqc\" (UID: \"aff22ed6-2491-4c78-94da-02f4b51493b8\") " pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.135645 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gk2cz"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.148778 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gsqqc"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:48.502231 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6964f4b777-tqx2x" podStartSLOduration=3.502212644 podStartE2EDuration="3.502212644s" podCreationTimestamp="2026-01-27 14:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:29:48.49428956 +0000 UTC m=+1475.078480564" watchObservedRunningTime="2026-01-27 14:29:48.502212644 +0000 UTC m=+1475.086403648"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.262436 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.265296 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.273891 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.273962 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.274069 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kn4zs"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.274179 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.274436 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.279918 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.409310 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grfhq\" (UniqueName: \"kubernetes.io/projected/3290bc53-f838-4b2f-9f5a-053331751546-kube-api-access-grfhq\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.409367 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3290bc53-f838-4b2f-9f5a-053331751546-config\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.409446 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.409492 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.413718 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.413821 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3290bc53-f838-4b2f-9f5a-053331751546-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.414333 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.414730 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3290bc53-f838-4b2f-9f5a-053331751546-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.473369 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.475916 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.479660 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.480065 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.480154 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.481642 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-d74tr"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.505985 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517087 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517182 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517292 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3290bc53-f838-4b2f-9f5a-053331751546-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517447 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517667 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3290bc53-f838-4b2f-9f5a-053331751546-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517762 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grfhq\" (UniqueName: \"kubernetes.io/projected/3290bc53-f838-4b2f-9f5a-053331751546-kube-api-access-grfhq\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517798 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3290bc53-f838-4b2f-9f5a-053331751546-config\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.517974 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.518997 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3290bc53-f838-4b2f-9f5a-053331751546-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.520324 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3290bc53-f838-4b2f-9f5a-053331751546-config\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.522973 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3290bc53-f838-4b2f-9f5a-053331751546-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.531615 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.531901 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.532067 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.532109 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/583476c3d9133f3b602e008830194730580aab59bf2607dbe95c3ba176efe31b/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.538441 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3290bc53-f838-4b2f-9f5a-053331751546-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.588364 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-29b4b1c3-0714-478f-92c7-d99589ce7c04\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0"
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.616724 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grfhq\" (UniqueName:
\"kubernetes.io/projected/3290bc53-f838-4b2f-9f5a-053331751546-kube-api-access-grfhq\") pod \"ovsdbserver-sb-0\" (UID: \"3290bc53-f838-4b2f-9f5a-053331751546\") " pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.619931 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/37a67feb-a317-4a04-af97-028064ca39da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620011 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620077 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620139 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5nhs\" (UniqueName: \"kubernetes.io/projected/37a67feb-a317-4a04-af97-028064ca39da-kube-api-access-g5nhs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620182 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/37a67feb-a317-4a04-af97-028064ca39da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620276 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a67feb-a317-4a04-af97-028064ca39da-config\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620315 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.620374 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722398 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5nhs\" (UniqueName: \"kubernetes.io/projected/37a67feb-a317-4a04-af97-028064ca39da-kube-api-access-g5nhs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722466 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37a67feb-a317-4a04-af97-028064ca39da-scripts\") 
pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722553 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a67feb-a317-4a04-af97-028064ca39da-config\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722587 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722632 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722680 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/37a67feb-a317-4a04-af97-028064ca39da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.722715 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc 
kubenswrapper[4729]: I0127 14:29:51.722774 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.723922 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/37a67feb-a317-4a04-af97-028064ca39da-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.724144 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a67feb-a317-4a04-af97-028064ca39da-config\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.724356 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/37a67feb-a317-4a04-af97-028064ca39da-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.724843 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.724895 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2a9d680893e8871aa8337ebfd3fd342993074003839b9e0f5d09366adfcaf04c/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.729407 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.729970 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.730419 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/37a67feb-a317-4a04-af97-028064ca39da-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.745112 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5nhs\" (UniqueName: \"kubernetes.io/projected/37a67feb-a317-4a04-af97-028064ca39da-kube-api-access-g5nhs\") pod \"ovsdbserver-nb-0\" (UID: 
\"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.765610 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ae495915-7e27-456d-80f2-9074c0a3ae02\") pod \"ovsdbserver-nb-0\" (UID: \"37a67feb-a317-4a04-af97-028064ca39da\") " pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.801583 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:51.907233 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:52.655140 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:52.656022 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:56.208309 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:56.208696 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:56.213082 
4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:56.571583 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6964f4b777-tqx2x" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:29:56.633130 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-779df67894-h7t9z"] Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.168352 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs"] Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.169970 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.172585 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.172801 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.180932 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs"] Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.306682 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80e51691-71cb-4c26-971b-6eda98d0b95f-secret-volume\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.307067 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vn6\" (UniqueName: \"kubernetes.io/projected/80e51691-71cb-4c26-971b-6eda98d0b95f-kube-api-access-r7vn6\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.307169 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e51691-71cb-4c26-971b-6eda98d0b95f-config-volume\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.409133 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80e51691-71cb-4c26-971b-6eda98d0b95f-secret-volume\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.409218 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7vn6\" (UniqueName: \"kubernetes.io/projected/80e51691-71cb-4c26-971b-6eda98d0b95f-kube-api-access-r7vn6\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.409297 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e51691-71cb-4c26-971b-6eda98d0b95f-config-volume\") pod \"collect-profiles-29492070-2z2zs\" (UID: 
\"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.410192 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e51691-71cb-4c26-971b-6eda98d0b95f-config-volume\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.415674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80e51691-71cb-4c26-971b-6eda98d0b95f-secret-volume\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.425490 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vn6\" (UniqueName: \"kubernetes.io/projected/80e51691-71cb-4c26-971b-6eda98d0b95f-kube-api-access-r7vn6\") pod \"collect-profiles-29492070-2z2zs\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:00.495400 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:11.977312 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:30:11 crc kubenswrapper[4729]: I0127 14:30:11.985092 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gk2cz"] Jan 27 14:30:13 crc kubenswrapper[4729]: E0127 14:30:13.010604 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 14:30:13 crc kubenswrapper[4729]: E0127 14:30:13.012127 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dgz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(190a5200-58b1-4ada-ab5f-47543de0795e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:13 crc 
kubenswrapper[4729]: E0127 14:30:13.013294 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" Jan 27 14:30:13 crc kubenswrapper[4729]: I0127 14:30:13.014475 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gsqqc"] Jan 27 14:30:13 crc kubenswrapper[4729]: E0127 14:30:13.730102 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" Jan 27 14:30:14 crc kubenswrapper[4729]: E0127 14:30:14.130350 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 27 14:30:14 crc kubenswrapper[4729]: E0127 14:30:14.130943 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n599h565h676h545h645h568h5b6h5cdh66fh654h585h5f8h78h5bdh56fh55bhb9h598hddh688hb8h56chd9h8dh86h697h597h698h655h55dh54h57bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnfv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(041a96ab-9f21-4d02-80df-cf7d6a81323b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:14 crc kubenswrapper[4729]: E0127 14:30:14.132658 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="041a96ab-9f21-4d02-80df-cf7d6a81323b" Jan 27 14:30:14 crc kubenswrapper[4729]: E0127 14:30:14.760132 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="041a96ab-9f21-4d02-80df-cf7d6a81323b" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.200007 4729 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.200089 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.200479 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:ni
l,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(e387f91f-9a73-4c8b-8e0b-31ed4c3874ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.200653 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmsgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(d148c837-c681-4446-9e81-195c19108d09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:16 crc 
kubenswrapper[4729]: E0127 14:30:16.201685 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="e387f91f-9a73-4c8b-8e0b-31ed4c3874ba" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.201778 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.773410 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="e387f91f-9a73-4c8b-8e0b-31ed4c3874ba" Jan 27 14:30:16 crc kubenswrapper[4729]: E0127 14:30:16.773452 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" Jan 27 14:30:18 crc kubenswrapper[4729]: W0127 14:30:18.576392 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadf6ced5_b7ed_4e10_b746_2f5ae329f4d8.slice/crio-85d93cbed5b412af967c03d1561e541b4bcf5394c261bdd7a46db5c90c2be680 WatchSource:0}: Error finding container 85d93cbed5b412af967c03d1561e541b4bcf5394c261bdd7a46db5c90c2be680: Status 404 returned error can't find the container with id 85d93cbed5b412af967c03d1561e541b4bcf5394c261bdd7a46db5c90c2be680 Jan 27 
14:30:18 crc kubenswrapper[4729]: E0127 14:30:18.590834 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 27 14:30:18 crc kubenswrapper[4729]: E0127 14:30:18.591147 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2llh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(13cfdd20-ad90-472d-8962-6bec29b3fa74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:18 crc 
kubenswrapper[4729]: E0127 14:30:18.592244 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" Jan 27 14:30:18 crc kubenswrapper[4729]: I0127 14:30:18.789102 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gk2cz" event={"ID":"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12","Type":"ContainerStarted","Data":"a305c58f505811b3b5f8eda2f2b0698c81497e1d159f6b7089f3cf84059bfa06"} Jan 27 14:30:18 crc kubenswrapper[4729]: I0127 14:30:18.790598 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerStarted","Data":"85d93cbed5b412af967c03d1561e541b4bcf5394c261bdd7a46db5c90c2be680"} Jan 27 14:30:18 crc kubenswrapper[4729]: I0127 14:30:18.792849 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsqqc" event={"ID":"aff22ed6-2491-4c78-94da-02f4b51493b8","Type":"ContainerStarted","Data":"be6944f74b24968c6329d74742085ad7c831a38ef44b020d040b9e018383bfde"} Jan 27 14:30:18 crc kubenswrapper[4729]: E0127 14:30:18.795902 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" Jan 27 14:30:21 crc kubenswrapper[4729]: I0127 14:30:21.683158 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-779df67894-h7t9z" podUID="adf1e407-1d97-4ef8-be0a-690eb2763b1d" containerName="console" 
containerID="cri-o://7839d7cbe95928236a0f448e10b3de55844205d847b17d72425d950f3ead868e" gracePeriod=15 Jan 27 14:30:21 crc kubenswrapper[4729]: I0127 14:30:21.818978 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-779df67894-h7t9z_adf1e407-1d97-4ef8-be0a-690eb2763b1d/console/0.log" Jan 27 14:30:21 crc kubenswrapper[4729]: I0127 14:30:21.819060 4729 generic.go:334] "Generic (PLEG): container finished" podID="adf1e407-1d97-4ef8-be0a-690eb2763b1d" containerID="7839d7cbe95928236a0f448e10b3de55844205d847b17d72425d950f3ead868e" exitCode=2 Jan 27 14:30:21 crc kubenswrapper[4729]: I0127 14:30:21.819149 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-779df67894-h7t9z" event={"ID":"adf1e407-1d97-4ef8-be0a-690eb2763b1d","Type":"ContainerDied","Data":"7839d7cbe95928236a0f448e10b3de55844205d847b17d72425d950f3ead868e"} Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.654618 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.654718 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.654763 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.655531 4729 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.655614 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" gracePeriod=600 Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.829984 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" exitCode=0 Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.830031 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898"} Jan 27 14:30:22 crc kubenswrapper[4729]: I0127 14:30:22.830104 4729 scope.go:117] "RemoveContainer" containerID="57151d4fd492cf8de80ac5d781a744532bc43e34fa60909ce879c487fc0325fe" Jan 27 14:30:23 crc kubenswrapper[4729]: E0127 14:30:23.482852 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:30:23 crc 
kubenswrapper[4729]: E0127 14:30:23.509786 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 27 14:30:23 crc kubenswrapper[4729]: E0127 14:30:23.509848 4729 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 27 14:30:23 crc kubenswrapper[4729]: E0127 14:30:23.510023 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9rvjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(69bd8e93-2421-411f-ad18-0a92631e3345): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 27 14:30:23 crc kubenswrapper[4729]: E0127 14:30:23.511289 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" Jan 27 14:30:23 crc kubenswrapper[4729]: I0127 14:30:23.842963 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:30:23 crc kubenswrapper[4729]: E0127 14:30:23.843770 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" 
podUID="69bd8e93-2421-411f-ad18-0a92631e3345" Jan 27 14:30:23 crc kubenswrapper[4729]: E0127 14:30:23.843516 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.504559 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.504763 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz9dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-qfcl6_openstack(b07f2206-b339-4038-be2f-d0e3301064e0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.506379 4729 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.756960 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.757196 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt 
-key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxwpp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-m2dsv_openshift-operators(be65005b-48eb-45fe-b1e7-f5b5416fd8f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.758525 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" podUID="be65005b-48eb-45fe-b1e7-f5b5416fd8f3" Jan 27 14:30:25 crc 
kubenswrapper[4729]: E0127 14:30:25.855771 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.855905 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.856191 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9c256,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-64hn6_openstack(a7d82605-52c6-48c8-b914-d2b3788a4f60): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.856306 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhrxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-69svg_openstack(7a9cbdd3-61dd-47fb-bc24-1e8bd689703c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.857441 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" podUID="a7d82605-52c6-48c8-b914-d2b3788a4f60" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.857503 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" Jan 27 14:30:25 crc kubenswrapper[4729]: I0127 14:30:25.874272 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-779df67894-h7t9z_adf1e407-1d97-4ef8-be0a-690eb2763b1d/console/0.log" Jan 27 14:30:25 crc kubenswrapper[4729]: I0127 14:30:25.875282 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-779df67894-h7t9z" event={"ID":"adf1e407-1d97-4ef8-be0a-690eb2763b1d","Type":"ContainerDied","Data":"0452510f7077000b7fd125c5a219b020d82f70d5f05719a465024a8d6b7a5b5c"} Jan 27 14:30:25 crc kubenswrapper[4729]: I0127 14:30:25.875316 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0452510f7077000b7fd125c5a219b020d82f70d5f05719a465024a8d6b7a5b5c" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.879220 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" podUID="be65005b-48eb-45fe-b1e7-f5b5416fd8f3" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.879352 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.885955 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.886066 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qll6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-n2srs_openstack(56d88efe-226f-4632-b56f-33df1c2a1826): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:30:25 crc kubenswrapper[4729]: E0127 14:30:25.888025 4729 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" podUID="56d88efe-226f-4632-b56f-33df1c2a1826" Jan 27 14:30:25 crc kubenswrapper[4729]: I0127 14:30:25.918762 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-779df67894-h7t9z_adf1e407-1d97-4ef8-be0a-690eb2763b1d/console/0.log" Jan 27 14:30:25 crc kubenswrapper[4729]: I0127 14:30:25.918845 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023117 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4drhr\" (UniqueName: \"kubernetes.io/projected/adf1e407-1d97-4ef8-be0a-690eb2763b1d-kube-api-access-4drhr\") pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023278 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-trusted-ca-bundle\") pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023373 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-service-ca\") pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023397 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-config\") 
pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023436 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-oauth-config\") pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023501 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-serving-cert\") pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.023568 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-oauth-serving-cert\") pod \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\" (UID: \"adf1e407-1d97-4ef8-be0a-690eb2763b1d\") " Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.024457 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.024471 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-service-ca" (OuterVolumeSpecName: "service-ca") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.024558 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.025110 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-config" (OuterVolumeSpecName: "console-config") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.030314 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf1e407-1d97-4ef8-be0a-690eb2763b1d-kube-api-access-4drhr" (OuterVolumeSpecName: "kube-api-access-4drhr") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). InnerVolumeSpecName "kube-api-access-4drhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.030603 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.047927 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "adf1e407-1d97-4ef8-be0a-690eb2763b1d" (UID: "adf1e407-1d97-4ef8-be0a-690eb2763b1d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.126902 4729 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.126935 4729 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.126944 4729 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.127012 4729 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.127022 4729 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/adf1e407-1d97-4ef8-be0a-690eb2763b1d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.127031 4729 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/adf1e407-1d97-4ef8-be0a-690eb2763b1d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.127039 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4drhr\" (UniqueName: \"kubernetes.io/projected/adf1e407-1d97-4ef8-be0a-690eb2763b1d-kube-api-access-4drhr\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.321197 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.338080 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs"] Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.882103 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-779df67894-h7t9z" Jan 27 14:30:26 crc kubenswrapper[4729]: E0127 14:30:26.885241 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.976502 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-779df67894-h7t9z"] Jan 27 14:30:26 crc kubenswrapper[4729]: I0127 14:30:26.987836 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-779df67894-h7t9z"] Jan 27 14:30:28 crc kubenswrapper[4729]: W0127 14:30:28.017044 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3290bc53_f838_4b2f_9f5a_053331751546.slice/crio-abc3f0a0442c9f6bfb5d61d6264abe9bc1a3054197d99ceebd53885b149f4244 WatchSource:0}: Error finding container abc3f0a0442c9f6bfb5d61d6264abe9bc1a3054197d99ceebd53885b149f4244: Status 404 returned error can't find the container with id abc3f0a0442c9f6bfb5d61d6264abe9bc1a3054197d99ceebd53885b149f4244 Jan 27 14:30:28 crc kubenswrapper[4729]: W0127 14:30:28.019843 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80e51691_71cb_4c26_971b_6eda98d0b95f.slice/crio-f4a6665b3b4a39a1610fe81ac6086fee32ebb2a12dc091823e8e08ff71c7295a WatchSource:0}: Error finding container f4a6665b3b4a39a1610fe81ac6086fee32ebb2a12dc091823e8e08ff71c7295a: Status 404 returned error can't find the container with id f4a6665b3b4a39a1610fe81ac6086fee32ebb2a12dc091823e8e08ff71c7295a Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.067189 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf1e407-1d97-4ef8-be0a-690eb2763b1d" path="/var/lib/kubelet/pods/adf1e407-1d97-4ef8-be0a-690eb2763b1d/volumes" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.260298 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.280320 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qll6v\" (UniqueName: \"kubernetes.io/projected/56d88efe-226f-4632-b56f-33df1c2a1826-kube-api-access-qll6v\") pod \"56d88efe-226f-4632-b56f-33df1c2a1826\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.280367 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-config\") pod \"56d88efe-226f-4632-b56f-33df1c2a1826\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.280387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-dns-svc\") pod \"56d88efe-226f-4632-b56f-33df1c2a1826\" (UID: \"56d88efe-226f-4632-b56f-33df1c2a1826\") " Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.281540 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56d88efe-226f-4632-b56f-33df1c2a1826" (UID: "56d88efe-226f-4632-b56f-33df1c2a1826"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.282384 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-config" (OuterVolumeSpecName: "config") pod "56d88efe-226f-4632-b56f-33df1c2a1826" (UID: "56d88efe-226f-4632-b56f-33df1c2a1826"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.285706 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d88efe-226f-4632-b56f-33df1c2a1826-kube-api-access-qll6v" (OuterVolumeSpecName: "kube-api-access-qll6v") pod "56d88efe-226f-4632-b56f-33df1c2a1826" (UID: "56d88efe-226f-4632-b56f-33df1c2a1826"). InnerVolumeSpecName "kube-api-access-qll6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.382403 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qll6v\" (UniqueName: \"kubernetes.io/projected/56d88efe-226f-4632-b56f-33df1c2a1826-kube-api-access-qll6v\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.382444 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.382453 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56d88efe-226f-4632-b56f-33df1c2a1826-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.527932 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.911014 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3290bc53-f838-4b2f-9f5a-053331751546","Type":"ContainerStarted","Data":"abc3f0a0442c9f6bfb5d61d6264abe9bc1a3054197d99ceebd53885b149f4244"} Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.912665 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.912662 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-n2srs" event={"ID":"56d88efe-226f-4632-b56f-33df1c2a1826","Type":"ContainerDied","Data":"c239475812f892f6d01efe4d32f6e7d9137d627e9456252b4c971b6e7e64d0b3"} Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.913851 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" event={"ID":"80e51691-71cb-4c26-971b-6eda98d0b95f","Type":"ContainerStarted","Data":"f4a6665b3b4a39a1610fe81ac6086fee32ebb2a12dc091823e8e08ff71c7295a"} Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.968187 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n2srs"] Jan 27 14:30:28 crc kubenswrapper[4729]: I0127 14:30:28.983932 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-n2srs"] Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.283359 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.301691 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d82605-52c6-48c8-b914-d2b3788a4f60-config\") pod \"a7d82605-52c6-48c8-b914-d2b3788a4f60\" (UID: \"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.301901 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c256\" (UniqueName: \"kubernetes.io/projected/a7d82605-52c6-48c8-b914-d2b3788a4f60-kube-api-access-9c256\") pod \"a7d82605-52c6-48c8-b914-d2b3788a4f60\" (UID: \"a7d82605-52c6-48c8-b914-d2b3788a4f60\") " Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.302205 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7d82605-52c6-48c8-b914-d2b3788a4f60-config" (OuterVolumeSpecName: "config") pod "a7d82605-52c6-48c8-b914-d2b3788a4f60" (UID: "a7d82605-52c6-48c8-b914-d2b3788a4f60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.302709 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d82605-52c6-48c8-b914-d2b3788a4f60-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.307179 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d82605-52c6-48c8-b914-d2b3788a4f60-kube-api-access-9c256" (OuterVolumeSpecName: "kube-api-access-9c256") pod "a7d82605-52c6-48c8-b914-d2b3788a4f60" (UID: "a7d82605-52c6-48c8-b914-d2b3788a4f60"). InnerVolumeSpecName "kube-api-access-9c256". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.404885 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c256\" (UniqueName: \"kubernetes.io/projected/a7d82605-52c6-48c8-b914-d2b3788a4f60-kube-api-access-9c256\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.942165 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"37a67feb-a317-4a04-af97-028064ca39da","Type":"ContainerStarted","Data":"cd659a77d1c7fef531760be689c3e98a66f41a18a8795abab131a4fea775c6c3"} Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.944084 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"37137af3-5865-4774-a6bc-4a96bb11a68d","Type":"ContainerStarted","Data":"d099f621bb702b9f6aa589a587a0cefd21d4762cce3676f2facc62be92c52798"} Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.947102 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" event={"ID":"a7d82605-52c6-48c8-b914-d2b3788a4f60","Type":"ContainerDied","Data":"57c4ea78a902cb2cad9d7b1470ea4b96a1b0789cb88c417d10769f6d0ce6d455"} Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.947138 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-64hn6" Jan 27 14:30:29 crc kubenswrapper[4729]: I0127 14:30:29.958901 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" event={"ID":"80e51691-71cb-4c26-971b-6eda98d0b95f","Type":"ContainerStarted","Data":"b81250f3ae57881d41110f6a33c22bba78f9b776c44170849e3a1487282d4142"} Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.044984 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-64hn6"] Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.088125 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d88efe-226f-4632-b56f-33df1c2a1826" path="/var/lib/kubelet/pods/56d88efe-226f-4632-b56f-33df1c2a1826/volumes" Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.088626 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-64hn6"] Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.971242 4729 generic.go:334] "Generic (PLEG): container finished" podID="80e51691-71cb-4c26-971b-6eda98d0b95f" containerID="b81250f3ae57881d41110f6a33c22bba78f9b776c44170849e3a1487282d4142" exitCode=0 Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.971305 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" event={"ID":"80e51691-71cb-4c26-971b-6eda98d0b95f","Type":"ContainerDied","Data":"b81250f3ae57881d41110f6a33c22bba78f9b776c44170849e3a1487282d4142"} Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.975056 4729 generic.go:334] "Generic (PLEG): container finished" podID="aff22ed6-2491-4c78-94da-02f4b51493b8" containerID="89a8128b197d2412b517252589f9484cc707c5a170c2e7ac1c324b24c53e7839" exitCode=0 Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.975128 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-gsqqc" event={"ID":"aff22ed6-2491-4c78-94da-02f4b51493b8","Type":"ContainerDied","Data":"89a8128b197d2412b517252589f9484cc707c5a170c2e7ac1c324b24c53e7839"} Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.982185 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b490b2c5-e772-48d2-a2cc-582bda8b019e","Type":"ContainerStarted","Data":"d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e"} Jan 27 14:30:30 crc kubenswrapper[4729]: I0127 14:30:30.985017 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"190a5200-58b1-4ada-ab5f-47543de0795e","Type":"ContainerStarted","Data":"2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742"} Jan 27 14:30:31 crc kubenswrapper[4729]: I0127 14:30:31.995290 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:31 crc kubenswrapper[4729]: I0127 14:30:31.999853 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" event={"ID":"80e51691-71cb-4c26-971b-6eda98d0b95f","Type":"ContainerDied","Data":"f4a6665b3b4a39a1610fe81ac6086fee32ebb2a12dc091823e8e08ff71c7295a"} Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:31.999925 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4a6665b3b4a39a1610fe81ac6086fee32ebb2a12dc091823e8e08ff71c7295a" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.002266 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba","Type":"ContainerStarted","Data":"7ca401ef2bf776f67a1322d6f90844ad6f09312eb211a7e68e7537793866effc"} Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.004525 4729 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/memcached-0" event={"ID":"041a96ab-9f21-4d02-80df-cf7d6a81323b","Type":"ContainerStarted","Data":"75ab211b2de8485ca8d4d9e85349dfa389f8961a58ee0849c0acd34710e3c84b"} Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.004924 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.006388 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gk2cz" event={"ID":"5e4b5a47-ff01-4fd6-b69f-4d70efc77a12","Type":"ContainerStarted","Data":"861ea922717f1a8567eefad22dfd976668588d654a60a2652c3e1a43aa8bafde"} Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.006502 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gk2cz" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.010188 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3290bc53-f838-4b2f-9f5a-053331751546","Type":"ContainerStarted","Data":"8388cfc616221b9a28904d7bbd07abc482f3202f1cd4555a1601516e7b6e082f"} Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.050396 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.051447725 podStartE2EDuration="50.050372985s" podCreationTimestamp="2026-01-27 14:29:42 +0000 UTC" firstStartedPulling="2026-01-27 14:29:43.444551402 +0000 UTC m=+1470.028742406" lastFinishedPulling="2026-01-27 14:30:30.443476662 +0000 UTC m=+1517.027667666" observedRunningTime="2026-01-27 14:30:32.042410529 +0000 UTC m=+1518.626601533" watchObservedRunningTime="2026-01-27 14:30:32.050372985 +0000 UTC m=+1518.634563989" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.067407 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e51691-71cb-4c26-971b-6eda98d0b95f-config-volume\") pod 
\"80e51691-71cb-4c26-971b-6eda98d0b95f\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.067456 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7vn6\" (UniqueName: \"kubernetes.io/projected/80e51691-71cb-4c26-971b-6eda98d0b95f-kube-api-access-r7vn6\") pod \"80e51691-71cb-4c26-971b-6eda98d0b95f\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.067521 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80e51691-71cb-4c26-971b-6eda98d0b95f-secret-volume\") pod \"80e51691-71cb-4c26-971b-6eda98d0b95f\" (UID: \"80e51691-71cb-4c26-971b-6eda98d0b95f\") " Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.070853 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80e51691-71cb-4c26-971b-6eda98d0b95f-config-volume" (OuterVolumeSpecName: "config-volume") pod "80e51691-71cb-4c26-971b-6eda98d0b95f" (UID: "80e51691-71cb-4c26-971b-6eda98d0b95f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.074262 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7d82605-52c6-48c8-b914-d2b3788a4f60" path="/var/lib/kubelet/pods/a7d82605-52c6-48c8-b914-d2b3788a4f60/volumes" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.089782 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gk2cz" podStartSLOduration=34.150593759 podStartE2EDuration="45.089760775s" podCreationTimestamp="2026-01-27 14:29:47 +0000 UTC" firstStartedPulling="2026-01-27 14:30:18.59072321 +0000 UTC m=+1505.174914214" lastFinishedPulling="2026-01-27 14:30:29.529890216 +0000 UTC m=+1516.114081230" observedRunningTime="2026-01-27 14:30:32.088212103 +0000 UTC m=+1518.672403137" watchObservedRunningTime="2026-01-27 14:30:32.089760775 +0000 UTC m=+1518.673951779" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.134584 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80e51691-71cb-4c26-971b-6eda98d0b95f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "80e51691-71cb-4c26-971b-6eda98d0b95f" (UID: "80e51691-71cb-4c26-971b-6eda98d0b95f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.169971 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/80e51691-71cb-4c26-971b-6eda98d0b95f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.169998 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80e51691-71cb-4c26-971b-6eda98d0b95f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.231698 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e51691-71cb-4c26-971b-6eda98d0b95f-kube-api-access-r7vn6" (OuterVolumeSpecName: "kube-api-access-r7vn6") pod "80e51691-71cb-4c26-971b-6eda98d0b95f" (UID: "80e51691-71cb-4c26-971b-6eda98d0b95f"). InnerVolumeSpecName "kube-api-access-r7vn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:32 crc kubenswrapper[4729]: I0127 14:30:32.272465 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7vn6\" (UniqueName: \"kubernetes.io/projected/80e51691-71cb-4c26-971b-6eda98d0b95f-kube-api-access-r7vn6\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.020687 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsqqc" event={"ID":"aff22ed6-2491-4c78-94da-02f4b51493b8","Type":"ContainerStarted","Data":"6cce8fe93e99294dff32ad99d6e27636f48802586c0c569ec37dd81a749684d6"} Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.021300 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gsqqc" event={"ID":"aff22ed6-2491-4c78-94da-02f4b51493b8","Type":"ContainerStarted","Data":"c525333c0a6a65b0ff70325551830b8173c4da8d2dc5ab11c494f57f4ee0f057"} Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 
14:30:33.021323 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gsqqc" Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.023063 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"37a67feb-a317-4a04-af97-028064ca39da","Type":"ContainerStarted","Data":"4e89e2b5c2d5e48c21ba06e6c5e68358376c87d62a23b13a2c8453fc4089b57d"} Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.025969 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerStarted","Data":"e66d6a69fb86ffc46d6cd6f55bc26898855d203774774300de53a1b55587a1e2"} Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.026033 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs" Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.072835 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gsqqc" podStartSLOduration=35.414722994 podStartE2EDuration="46.072788787s" podCreationTimestamp="2026-01-27 14:29:47 +0000 UTC" firstStartedPulling="2026-01-27 14:30:18.592174829 +0000 UTC m=+1505.176365833" lastFinishedPulling="2026-01-27 14:30:29.250240622 +0000 UTC m=+1515.834431626" observedRunningTime="2026-01-27 14:30:33.057393549 +0000 UTC m=+1519.641584563" watchObservedRunningTime="2026-01-27 14:30:33.072788787 +0000 UTC m=+1519.656979801" Jan 27 14:30:33 crc kubenswrapper[4729]: I0127 14:30:33.150031 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gsqqc" Jan 27 14:30:35 crc kubenswrapper[4729]: I0127 14:30:35.051975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" 
event={"ID":"d148c837-c681-4446-9e81-195c19108d09","Type":"ContainerStarted","Data":"920267262e426a5814f7dcc9824fbb8d062ed5b8d587b97e2c1394ef8c992b51"} Jan 27 14:30:37 crc kubenswrapper[4729]: I0127 14:30:37.051428 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:30:37 crc kubenswrapper[4729]: E0127 14:30:37.052061 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:30:37 crc kubenswrapper[4729]: I0127 14:30:37.821200 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 27 14:30:39 crc kubenswrapper[4729]: I0127 14:30:39.102339 4729 generic.go:334] "Generic (PLEG): container finished" podID="37137af3-5865-4774-a6bc-4a96bb11a68d" containerID="d099f621bb702b9f6aa589a587a0cefd21d4762cce3676f2facc62be92c52798" exitCode=0 Jan 27 14:30:39 crc kubenswrapper[4729]: I0127 14:30:39.102957 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"37137af3-5865-4774-a6bc-4a96bb11a68d","Type":"ContainerDied","Data":"d099f621bb702b9f6aa589a587a0cefd21d4762cce3676f2facc62be92c52798"} Jan 27 14:30:39 crc kubenswrapper[4729]: I0127 14:30:39.107430 4729 generic.go:334] "Generic (PLEG): container finished" podID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerID="e66d6a69fb86ffc46d6cd6f55bc26898855d203774774300de53a1b55587a1e2" exitCode=0 Jan 27 14:30:39 crc kubenswrapper[4729]: I0127 14:30:39.107484 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerDied","Data":"e66d6a69fb86ffc46d6cd6f55bc26898855d203774774300de53a1b55587a1e2"} Jan 27 14:30:43 crc kubenswrapper[4729]: I0127 14:30:43.149362 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"13cfdd20-ad90-472d-8962-6bec29b3fa74","Type":"ContainerStarted","Data":"1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd"} Jan 27 14:30:43 crc kubenswrapper[4729]: I0127 14:30:43.154855 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"37137af3-5865-4774-a6bc-4a96bb11a68d","Type":"ContainerStarted","Data":"03369489cda0cfed8406b2e9d88ae63d2963c2a01581c02a10c928beeb319dc6"} Jan 27 14:30:43 crc kubenswrapper[4729]: I0127 14:30:43.203616 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.973097793 podStartE2EDuration="1m2.203593194s" podCreationTimestamp="2026-01-27 14:29:41 +0000 UTC" firstStartedPulling="2026-01-27 14:29:43.978146511 +0000 UTC m=+1470.562337515" lastFinishedPulling="2026-01-27 14:30:29.208641912 +0000 UTC m=+1515.792832916" observedRunningTime="2026-01-27 14:30:43.198977538 +0000 UTC m=+1529.783168562" watchObservedRunningTime="2026-01-27 14:30:43.203593194 +0000 UTC m=+1529.787784198" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.164820 4729 generic.go:334] "Generic (PLEG): container finished" podID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerID="127aa1201f17a442cd9fe2ba21a21c49c6b00f07d05e2f805ca052344b128930" exitCode=0 Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.164899 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" event={"ID":"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c","Type":"ContainerDied","Data":"127aa1201f17a442cd9fe2ba21a21c49c6b00f07d05e2f805ca052344b128930"} Jan 27 14:30:44 crc 
kubenswrapper[4729]: I0127 14:30:44.167551 4729 generic.go:334] "Generic (PLEG): container finished" podID="b07f2206-b339-4038-be2f-d0e3301064e0" containerID="1060bdf4fe2799f1353504426ff6b1de15fc817b88df76f7d2864fc2a7d16203" exitCode=0 Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.167646 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" event={"ID":"b07f2206-b339-4038-be2f-d0e3301064e0","Type":"ContainerDied","Data":"1060bdf4fe2799f1353504426ff6b1de15fc817b88df76f7d2864fc2a7d16203"} Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.170512 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"3290bc53-f838-4b2f-9f5a-053331751546","Type":"ContainerStarted","Data":"0fdc7477bb696e0cff198bc4301378a409c1a10c44858d4fcf4581b169e858d2"} Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.179107 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"37a67feb-a317-4a04-af97-028064ca39da","Type":"ContainerStarted","Data":"548c645124f196e44ac17a2b6fbcbda9ff007357304d5961faa175d889ab0835"} Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.207800 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=39.38721242 podStartE2EDuration="54.20778235s" podCreationTimestamp="2026-01-27 14:29:50 +0000 UTC" firstStartedPulling="2026-01-27 14:30:28.134371542 +0000 UTC m=+1514.718562556" lastFinishedPulling="2026-01-27 14:30:42.954941482 +0000 UTC m=+1529.539132486" observedRunningTime="2026-01-27 14:30:44.203130593 +0000 UTC m=+1530.787321607" watchObservedRunningTime="2026-01-27 14:30:44.20778235 +0000 UTC m=+1530.791973354" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.227767 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=40.375025153 
podStartE2EDuration="54.227751742s" podCreationTimestamp="2026-01-27 14:29:50 +0000 UTC" firstStartedPulling="2026-01-27 14:30:29.218317715 +0000 UTC m=+1515.802508719" lastFinishedPulling="2026-01-27 14:30:43.071044304 +0000 UTC m=+1529.655235308" observedRunningTime="2026-01-27 14:30:44.220901866 +0000 UTC m=+1530.805092870" watchObservedRunningTime="2026-01-27 14:30:44.227751742 +0000 UTC m=+1530.811942756" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.580739 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69svg"] Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.623748 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-j2ktx"] Jan 27 14:30:44 crc kubenswrapper[4729]: E0127 14:30:44.624202 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf1e407-1d97-4ef8-be0a-690eb2763b1d" containerName="console" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.624220 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf1e407-1d97-4ef8-be0a-690eb2763b1d" containerName="console" Jan 27 14:30:44 crc kubenswrapper[4729]: E0127 14:30:44.624252 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e51691-71cb-4c26-971b-6eda98d0b95f" containerName="collect-profiles" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.624259 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e51691-71cb-4c26-971b-6eda98d0b95f" containerName="collect-profiles" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.624430 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e51691-71cb-4c26-971b-6eda98d0b95f" containerName="collect-profiles" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.624454 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf1e407-1d97-4ef8-be0a-690eb2763b1d" containerName="console" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.625436 4729 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.641520 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-j2ktx"] Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.704750 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.704853 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sfs6\" (UniqueName: \"kubernetes.io/projected/c814f616-ec2d-45bf-9f63-04e825e85942-kube-api-access-6sfs6\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.704909 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-config\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.807068 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.807235 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6sfs6\" (UniqueName: \"kubernetes.io/projected/c814f616-ec2d-45bf-9f63-04e825e85942-kube-api-access-6sfs6\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.807275 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-config\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.814738 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.825044 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-config\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.830804 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sfs6\" (UniqueName: \"kubernetes.io/projected/c814f616-ec2d-45bf-9f63-04e825e85942-kube-api-access-6sfs6\") pod \"dnsmasq-dns-7cb5889db5-j2ktx\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") " pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:44 crc kubenswrapper[4729]: I0127 14:30:44.959854 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.751972 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.758189 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.758214 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.762126 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.762281 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.762712 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-st85r" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.762767 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.804193 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.838222 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-lock\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.838497 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.838609 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7sk\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-kube-api-access-pj7sk\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.838776 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.838815 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.838851 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-cache\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.857604 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.907470 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.940340 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7sk\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-kube-api-access-pj7sk\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.940437 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.940460 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.940483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-cache\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.940526 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-lock\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.941361 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: E0127 14:30:45.941484 4729 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:45 crc kubenswrapper[4729]: E0127 14:30:45.941511 4729 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:45 crc kubenswrapper[4729]: E0127 14:30:45.941566 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift podName:399f6c9f-a3d5-4235-bce9-f3623e6be7f4 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:46.441546647 +0000 UTC m=+1533.025737721 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift") pod "swift-storage-0" (UID: "399f6c9f-a3d5-4235-bce9-f3623e6be7f4") : configmap "swift-ring-files" not found Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.942178 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-cache\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.942564 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-lock\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.969243 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7sk\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-kube-api-access-pj7sk\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:45 crc kubenswrapper[4729]: I0127 14:30:45.971209 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.069628 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.069668 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a435f415ecc1c2de174a652a75d7d00dc0f6fcc7a73ffe10de88510a6214e820/globalmount\"" pod="openstack/swift-storage-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.112191 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.203461 4729 generic.go:334] "Generic (PLEG): container finished" podID="e387f91f-9a73-4c8b-8e0b-31ed4c3874ba" containerID="7ca401ef2bf776f67a1322d6f90844ad6f09312eb211a7e68e7537793866effc" exitCode=0 Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.203550 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba","Type":"ContainerDied","Data":"7ca401ef2bf776f67a1322d6f90844ad6f09312eb211a7e68e7537793866effc"} Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.204747 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.204783 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.205906 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5c06142c-4718-4ad5-b836-ac33317d2f6e\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " 
pod="openstack/swift-storage-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.247610 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.248828 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.452733 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:46 crc kubenswrapper[4729]: E0127 14:30:46.452964 4729 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:46 crc kubenswrapper[4729]: E0127 14:30:46.452978 4729 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:46 crc kubenswrapper[4729]: E0127 14:30:46.453022 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift podName:399f6c9f-a3d5-4235-bce9-f3623e6be7f4 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:47.453008704 +0000 UTC m=+1534.037199708 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift") pod "swift-storage-0" (UID: "399f6c9f-a3d5-4235-bce9-f3623e6be7f4") : configmap "swift-ring-files" not found Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.532959 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qfcl6"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.581248 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-w269z"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.583787 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.589792 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.606847 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-w269z"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.683964 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662gm\" (UniqueName: \"kubernetes.io/projected/48f97a04-f193-482e-a7c9-819fb926008f-kube-api-access-662gm\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.684186 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-config\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.684235 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.684261 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-dns-svc\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.684340 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-ngktk"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.685594 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.689341 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.712518 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ngktk"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792366 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-662gm\" (UniqueName: \"kubernetes.io/projected/48f97a04-f193-482e-a7c9-819fb926008f-kube-api-access-662gm\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792456 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfr7\" (UniqueName: 
\"kubernetes.io/projected/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-kube-api-access-kqfr7\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792495 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792520 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-ovn-rundir\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792616 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-config\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792648 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-config\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792681 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792746 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-dns-svc\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.792822 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-ovs-rundir\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.793222 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-combined-ca-bundle\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.793548 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-config\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.794079 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-dns-svc\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.795034 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-ovsdbserver-nb\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.817962 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-j2ktx"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.855625 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-662gm\" (UniqueName: \"kubernetes.io/projected/48f97a04-f193-482e-a7c9-819fb926008f-kube-api-access-662gm\") pod \"dnsmasq-dns-57d65f699f-w269z\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.856192 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4f2nb"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.859213 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.865396 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.896032 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4f2nb"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.897166 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfr7\" (UniqueName: \"kubernetes.io/projected/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-kube-api-access-kqfr7\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.897215 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.897244 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-ovn-rundir\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.897286 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-config\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: 
I0127 14:30:46.897368 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-ovs-rundir\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.897426 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-combined-ca-bundle\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.898374 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-ovn-rundir\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.901600 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-config\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.901691 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-ovs-rundir\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.909953 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.921618 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.923339 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.924522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-combined-ca-bundle\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.927040 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.932896 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5w97h" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.933150 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.935841 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.936834 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.938865 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfr7\" (UniqueName: \"kubernetes.io/projected/0eab35a0-e5dd-4c49-9d7b-9f8f0722e754-kube-api-access-kqfr7\") pod \"ovn-controller-metrics-ngktk\" (UID: \"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754\") " pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:46 crc kubenswrapper[4729]: I0127 14:30:46.986079 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000064 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000147 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c1da68b-399e-4543-918f-6deed78e3626-scripts\") pod \"ovn-northd-0\" (UID: 
\"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000242 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwncp\" (UniqueName: \"kubernetes.io/projected/6c1da68b-399e-4543-918f-6deed78e3626-kube-api-access-bwncp\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000277 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000299 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9sm\" (UniqueName: \"kubernetes.io/projected/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-kube-api-access-6b9sm\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000328 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000353 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000374 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000474 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c1da68b-399e-4543-918f-6deed78e3626-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000511 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1da68b-399e-4543-918f-6deed78e3626-config\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000538 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.000564 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-config\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 
14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.020553 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-ngktk" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.102944 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c1da68b-399e-4543-918f-6deed78e3626-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103352 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1da68b-399e-4543-918f-6deed78e3626-config\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103382 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103411 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-config\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103462 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" 
Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103502 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c1da68b-399e-4543-918f-6deed78e3626-scripts\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103630 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwncp\" (UniqueName: \"kubernetes.io/projected/6c1da68b-399e-4543-918f-6deed78e3626-kube-api-access-bwncp\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103663 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103688 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b9sm\" (UniqueName: \"kubernetes.io/projected/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-kube-api-access-6b9sm\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103718 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103742 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.103763 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.106091 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6c1da68b-399e-4543-918f-6deed78e3626-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.106931 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c1da68b-399e-4543-918f-6deed78e3626-config\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.107634 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.108367 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.109090 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c1da68b-399e-4543-918f-6deed78e3626-scripts\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.109678 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.109688 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-config\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.110787 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.119038 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.129603 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1da68b-399e-4543-918f-6deed78e3626-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.134704 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwncp\" (UniqueName: \"kubernetes.io/projected/6c1da68b-399e-4543-918f-6deed78e3626-kube-api-access-bwncp\") pod \"ovn-northd-0\" (UID: \"6c1da68b-399e-4543-918f-6deed78e3626\") " pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.141095 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9sm\" (UniqueName: \"kubernetes.io/projected/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-kube-api-access-6b9sm\") pod \"dnsmasq-dns-b8fbc5445-4f2nb\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.235557 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.241244 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" event={"ID":"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c","Type":"ContainerStarted","Data":"93c4ecbb2fc5e9a98d8f033d7f23ffbb72e4e49c436ae8ec109fca8db0b8e1b0"} Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.242177 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerName="dnsmasq-dns" containerID="cri-o://93c4ecbb2fc5e9a98d8f033d7f23ffbb72e4e49c436ae8ec109fca8db0b8e1b0" gracePeriod=10 Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.242480 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.244584 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.304123 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" podStartSLOduration=6.477978081 podStartE2EDuration="1m9.304098004s" podCreationTimestamp="2026-01-27 14:29:38 +0000 UTC" firstStartedPulling="2026-01-27 14:29:40.130021071 +0000 UTC m=+1466.714212075" lastFinishedPulling="2026-01-27 14:30:42.956140994 +0000 UTC m=+1529.540331998" observedRunningTime="2026-01-27 14:30:47.268845647 +0000 UTC m=+1533.853036651" watchObservedRunningTime="2026-01-27 14:30:47.304098004 +0000 UTC m=+1533.888289018" Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.337394 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-j2ktx"] Jan 27 14:30:47 crc kubenswrapper[4729]: W0127 14:30:47.379921 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc814f616_ec2d_45bf_9f63_04e825e85942.slice/crio-83fe44f7bfcbbb4bbd64f74cf4c75041c55cc6d7c24b2b64b8ebb38f66d40957 WatchSource:0}: Error finding container 83fe44f7bfcbbb4bbd64f74cf4c75041c55cc6d7c24b2b64b8ebb38f66d40957: Status 404 returned error can't find the container with id 83fe44f7bfcbbb4bbd64f74cf4c75041c55cc6d7c24b2b64b8ebb38f66d40957 Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.515811 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:47 crc kubenswrapper[4729]: E0127 14:30:47.516140 4729 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:47 crc kubenswrapper[4729]: E0127 14:30:47.516164 4729 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:47 crc kubenswrapper[4729]: E0127 14:30:47.516219 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift podName:399f6c9f-a3d5-4235-bce9-f3623e6be7f4 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:49.516196794 +0000 UTC m=+1536.100387798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift") pod "swift-storage-0" (UID: "399f6c9f-a3d5-4235-bce9-f3623e6be7f4") : configmap "swift-ring-files" not found Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.607552 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-w269z"] Jan 27 14:30:47 crc kubenswrapper[4729]: W0127 14:30:47.621632 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f97a04_f193_482e_a7c9_819fb926008f.slice/crio-177e76ac38bf8d1904ca8bbe6b37735ee5e53ce3a44ff0e0208a694e30dc0c3e WatchSource:0}: Error finding container 177e76ac38bf8d1904ca8bbe6b37735ee5e53ce3a44ff0e0208a694e30dc0c3e: Status 404 returned error can't find the container with id 177e76ac38bf8d1904ca8bbe6b37735ee5e53ce3a44ff0e0208a694e30dc0c3e Jan 27 14:30:47 crc kubenswrapper[4729]: I0127 14:30:47.761151 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-ngktk"] Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.016137 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4f2nb"] Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.190221 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 27 14:30:48 crc kubenswrapper[4729]: W0127 
14:30:48.217725 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c1da68b_399e_4543_918f_6deed78e3626.slice/crio-0ee67bf4f7420c3d0adcb197798376cf50e8a0c89a920f96b9b371f9c5c53da0 WatchSource:0}: Error finding container 0ee67bf4f7420c3d0adcb197798376cf50e8a0c89a920f96b9b371f9c5c53da0: Status 404 returned error can't find the container with id 0ee67bf4f7420c3d0adcb197798376cf50e8a0c89a920f96b9b371f9c5c53da0 Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.256139 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" event={"ID":"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3","Type":"ContainerStarted","Data":"4cac18262c773354b2f75ebfc67a66a86883aa2e962e8b5b18a7911113fe5b51"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.261461 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"69bd8e93-2421-411f-ad18-0a92631e3345","Type":"ContainerStarted","Data":"5ac0e25c32c0ec47b5fe249423023e939778c9859012f75e6aa2e868488277b1"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.261700 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.271137 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" event={"ID":"b07f2206-b339-4038-be2f-d0e3301064e0","Type":"ContainerStarted","Data":"7ee00ea01460c40d193e8a54f8162c0aac6a559b981a90d97cd421be3693751f"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.271259 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.271231 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" 
podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="dnsmasq-dns" containerID="cri-o://7ee00ea01460c40d193e8a54f8162c0aac6a559b981a90d97cd421be3693751f" gracePeriod=10 Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.274521 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c1da68b-399e-4543-918f-6deed78e3626","Type":"ContainerStarted","Data":"0ee67bf4f7420c3d0adcb197798376cf50e8a0c89a920f96b9b371f9c5c53da0"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.276179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" event={"ID":"be65005b-48eb-45fe-b1e7-f5b5416fd8f3","Type":"ContainerStarted","Data":"bd6a4c39b2bae15eb0ccab522d92fe15a7c1545b405a77024f83c12615446126"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.283515 4729 generic.go:334] "Generic (PLEG): container finished" podID="c814f616-ec2d-45bf-9f63-04e825e85942" containerID="72475453719cdb712e20a9b439107936af647791a03204bc829a996b9ac769bf" exitCode=0 Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.283588 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" event={"ID":"c814f616-ec2d-45bf-9f63-04e825e85942","Type":"ContainerDied","Data":"72475453719cdb712e20a9b439107936af647791a03204bc829a996b9ac769bf"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.284201 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" event={"ID":"c814f616-ec2d-45bf-9f63-04e825e85942","Type":"ContainerStarted","Data":"83fe44f7bfcbbb4bbd64f74cf4c75041c55cc6d7c24b2b64b8ebb38f66d40957"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.286045 4729 generic.go:334] "Generic (PLEG): container finished" podID="48f97a04-f193-482e-a7c9-819fb926008f" containerID="60802f82cf6fee0e9ee3a7b3381b4043a7f2c0711a44219f3adb78d283e28b39" exitCode=0 Jan 27 14:30:48 crc kubenswrapper[4729]: 
I0127 14:30:48.286163 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-w269z" event={"ID":"48f97a04-f193-482e-a7c9-819fb926008f","Type":"ContainerDied","Data":"60802f82cf6fee0e9ee3a7b3381b4043a7f2c0711a44219f3adb78d283e28b39"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.286598 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-w269z" event={"ID":"48f97a04-f193-482e-a7c9-819fb926008f","Type":"ContainerStarted","Data":"177e76ac38bf8d1904ca8bbe6b37735ee5e53ce3a44ff0e0208a694e30dc0c3e"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.287382 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ngktk" event={"ID":"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754","Type":"ContainerStarted","Data":"b1f60b956b9780ba70a14ab91c035bc3333990200800bbf7c17fe04eeb473c06"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.288509 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e387f91f-9a73-4c8b-8e0b-31ed4c3874ba","Type":"ContainerStarted","Data":"a22ed86f0ba9c33a186b902d51d52b63315a642b6d1a062e29fbc82cfa42dcd4"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.294359 4729 generic.go:334] "Generic (PLEG): container finished" podID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerID="93c4ecbb2fc5e9a98d8f033d7f23ffbb72e4e49c436ae8ec109fca8db0b8e1b0" exitCode=0 Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.294445 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" event={"ID":"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c","Type":"ContainerDied","Data":"93c4ecbb2fc5e9a98d8f033d7f23ffbb72e4e49c436ae8ec109fca8db0b8e1b0"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.294515 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" 
event={"ID":"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c","Type":"ContainerDied","Data":"3070c9a98045c27cb517fece414314e5cdae99dda4e93e7baf8ecf4e42cd631b"} Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.294534 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3070c9a98045c27cb517fece414314e5cdae99dda4e93e7baf8ecf4e42cd631b" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.316568 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.776364761 podStartE2EDuration="1m4.316544765s" podCreationTimestamp="2026-01-27 14:29:44 +0000 UTC" firstStartedPulling="2026-01-27 14:29:45.511476736 +0000 UTC m=+1472.095667740" lastFinishedPulling="2026-01-27 14:30:47.05165674 +0000 UTC m=+1533.635847744" observedRunningTime="2026-01-27 14:30:48.280184978 +0000 UTC m=+1534.864376002" watchObservedRunningTime="2026-01-27 14:30:48.316544765 +0000 UTC m=+1534.900735759" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.317233 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-m2dsv" podStartSLOduration=5.58507036 podStartE2EDuration="1m3.317227274s" podCreationTimestamp="2026-01-27 14:29:45 +0000 UTC" firstStartedPulling="2026-01-27 14:29:47.029808904 +0000 UTC m=+1473.613999908" lastFinishedPulling="2026-01-27 14:30:44.761965818 +0000 UTC m=+1531.346156822" observedRunningTime="2026-01-27 14:30:48.307765397 +0000 UTC m=+1534.891956421" watchObservedRunningTime="2026-01-27 14:30:48.317227274 +0000 UTC m=+1534.901418278" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.348107 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" podStartSLOduration=7.076855652 podStartE2EDuration="1m10.348087572s" podCreationTimestamp="2026-01-27 14:29:38 +0000 UTC" firstStartedPulling="2026-01-27 14:29:39.684818052 +0000 
UTC m=+1466.269009056" lastFinishedPulling="2026-01-27 14:30:42.956049972 +0000 UTC m=+1529.540240976" observedRunningTime="2026-01-27 14:30:48.33510598 +0000 UTC m=+1534.919297004" watchObservedRunningTime="2026-01-27 14:30:48.348087572 +0000 UTC m=+1534.932278566" Jan 27 14:30:48 crc kubenswrapper[4729]: E0127 14:30:48.367012 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f97a04_f193_482e_a7c9_819fb926008f.slice/crio-60802f82cf6fee0e9ee3a7b3381b4043a7f2c0711a44219f3adb78d283e28b39.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb07f2206_b339_4038_be2f_d0e3301064e0.slice/crio-conmon-7ee00ea01460c40d193e8a54f8162c0aac6a559b981a90d97cd421be3693751f.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.371746 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371967.483051 podStartE2EDuration="1m9.371725294s" podCreationTimestamp="2026-01-27 14:29:39 +0000 UTC" firstStartedPulling="2026-01-27 14:29:42.247926059 +0000 UTC m=+1468.832117053" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:48.360915911 +0000 UTC m=+1534.945106925" watchObservedRunningTime="2026-01-27 14:30:48.371725294 +0000 UTC m=+1534.955916298" Jan 27 14:30:48 crc kubenswrapper[4729]: E0127 14:30:48.371237 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f97a04_f193_482e_a7c9_819fb926008f.slice/crio-60802f82cf6fee0e9ee3a7b3381b4043a7f2c0711a44219f3adb78d283e28b39.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 
14:30:48.492696 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.643419 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-config\") pod \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.643923 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-dns-svc\") pod \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.643994 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhrxx\" (UniqueName: \"kubernetes.io/projected/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-kube-api-access-fhrxx\") pod \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\" (UID: \"7a9cbdd3-61dd-47fb-bc24-1e8bd689703c\") " Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.649757 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-kube-api-access-fhrxx" (OuterVolumeSpecName: "kube-api-access-fhrxx") pod "7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" (UID: "7a9cbdd3-61dd-47fb-bc24-1e8bd689703c"). InnerVolumeSpecName "kube-api-access-fhrxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.729395 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-config" (OuterVolumeSpecName: "config") pod "7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" (UID: "7a9cbdd3-61dd-47fb-bc24-1e8bd689703c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.741184 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" (UID: "7a9cbdd3-61dd-47fb-bc24-1e8bd689703c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.747492 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.747753 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhrxx\" (UniqueName: \"kubernetes.io/projected/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-kube-api-access-fhrxx\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:48 crc kubenswrapper[4729]: I0127 14:30:48.747887 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.308712 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-w269z" event={"ID":"48f97a04-f193-482e-a7c9-819fb926008f","Type":"ContainerStarted","Data":"8ac129e8e4f56ce5caddb3d023c0ff07d4ddea68ddae39f42b1ed4178e9f3128"} Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.309172 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.314093 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-ngktk" 
event={"ID":"0eab35a0-e5dd-4c49-9d7b-9f8f0722e754","Type":"ContainerStarted","Data":"7625b4965b400ad0f3989237e59094607a0aee76b1f4dfc94206bbb21b0a468b"} Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.319094 4729 generic.go:334] "Generic (PLEG): container finished" podID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerID="5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0" exitCode=0 Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.319369 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" event={"ID":"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3","Type":"ContainerDied","Data":"5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0"} Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.325476 4729 generic.go:334] "Generic (PLEG): container finished" podID="b07f2206-b339-4038-be2f-d0e3301064e0" containerID="7ee00ea01460c40d193e8a54f8162c0aac6a559b981a90d97cd421be3693751f" exitCode=0 Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.325586 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-69svg" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.329422 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" event={"ID":"b07f2206-b339-4038-be2f-d0e3301064e0","Type":"ContainerDied","Data":"7ee00ea01460c40d193e8a54f8162c0aac6a559b981a90d97cd421be3693751f"} Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.341991 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d65f699f-w269z" podStartSLOduration=3.34196843 podStartE2EDuration="3.34196843s" podCreationTimestamp="2026-01-27 14:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:49.333060317 +0000 UTC m=+1535.917251351" watchObservedRunningTime="2026-01-27 14:30:49.34196843 +0000 UTC m=+1535.926159434" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.387377 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-ngktk" podStartSLOduration=3.387345671 podStartE2EDuration="3.387345671s" podCreationTimestamp="2026-01-27 14:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:49.372307363 +0000 UTC m=+1535.956498367" watchObservedRunningTime="2026-01-27 14:30:49.387345671 +0000 UTC m=+1535.971536695" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.426541 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69svg"] Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.438566 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-69svg"] Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.583750 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:30:49 crc kubenswrapper[4729]: E0127 14:30:49.584168 4729 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:30:49 crc kubenswrapper[4729]: E0127 14:30:49.584213 4729 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:30:49 crc kubenswrapper[4729]: E0127 14:30:49.584298 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift podName:399f6c9f-a3d5-4235-bce9-f3623e6be7f4 nodeName:}" failed. No retries permitted until 2026-01-27 14:30:53.584276779 +0000 UTC m=+1540.168467773 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift") pod "swift-storage-0" (UID: "399f6c9f-a3d5-4235-bce9-f3623e6be7f4") : configmap "swift-ring-files" not found Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.695412 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-tlnnx"] Jan 27 14:30:49 crc kubenswrapper[4729]: E0127 14:30:49.695929 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerName="init" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.695952 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerName="init" Jan 27 14:30:49 crc kubenswrapper[4729]: E0127 14:30:49.695979 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerName="dnsmasq-dns" Jan 27 14:30:49 crc kubenswrapper[4729]: 
I0127 14:30:49.695988 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerName="dnsmasq-dns" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.696214 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" containerName="dnsmasq-dns" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.697120 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.702285 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.702501 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.702520 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.724414 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tlnnx"] Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.763322 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-tlnnx"] Jan 27 14:30:49 crc kubenswrapper[4729]: E0127 14:30:49.764476 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-8g4x6 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-tlnnx" podUID="99a3372e-12c9-434c-9def-f5d614ef049c" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788463 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-swiftconf\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788518 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-scripts\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788607 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-dispersionconf\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788669 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99a3372e-12c9-434c-9def-f5d614ef049c-etc-swift\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788712 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-combined-ca-bundle\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788820 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4x6\" (UniqueName: 
\"kubernetes.io/projected/99a3372e-12c9-434c-9def-f5d614ef049c-kube-api-access-8g4x6\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.788896 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-ring-data-devices\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.792048 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-z4znt"] Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.793783 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.825115 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-z4znt"] Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.890679 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-dispersionconf\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.890759 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-etc-swift\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.890795 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99a3372e-12c9-434c-9def-f5d614ef049c-etc-swift\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.890827 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-dispersionconf\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.890906 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-combined-ca-bundle\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.890990 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-combined-ca-bundle\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891018 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-ring-data-devices\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891044 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8g4x6\" (UniqueName: \"kubernetes.io/projected/99a3372e-12c9-434c-9def-f5d614ef049c-kube-api-access-8g4x6\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891075 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-scripts\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891124 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-ring-data-devices\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891150 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-swiftconf\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891202 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdp6\" (UniqueName: \"kubernetes.io/projected/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-kube-api-access-4jdp6\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891280 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-swiftconf\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891305 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-scripts\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.891758 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99a3372e-12c9-434c-9def-f5d614ef049c-etc-swift\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.892244 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-scripts\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.892504 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-ring-data-devices\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.897418 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-dispersionconf\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.899299 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-swiftconf\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.910800 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-combined-ca-bundle\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.926580 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g4x6\" (UniqueName: \"kubernetes.io/projected/99a3372e-12c9-434c-9def-f5d614ef049c-kube-api-access-8g4x6\") pod \"swift-ring-rebalance-tlnnx\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993661 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-swiftconf\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993730 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdp6\" (UniqueName: \"kubernetes.io/projected/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-kube-api-access-4jdp6\") pod 
\"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993828 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-etc-swift\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993854 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-dispersionconf\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993938 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-combined-ca-bundle\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993964 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-ring-data-devices\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.993989 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-scripts\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " 
pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.995406 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-etc-swift\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.995629 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-scripts\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.995716 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-ring-data-devices\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.998166 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-dispersionconf\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:49 crc kubenswrapper[4729]: I0127 14:30:49.999104 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-swiftconf\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.010578 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-combined-ca-bundle\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.012119 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdp6\" (UniqueName: \"kubernetes.io/projected/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-kube-api-access-4jdp6\") pod \"swift-ring-rebalance-z4znt\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.073047 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9cbdd3-61dd-47fb-bc24-1e8bd689703c" path="/var/lib/kubelet/pods/7a9cbdd3-61dd-47fb-bc24-1e8bd689703c/volumes" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.119147 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.339599 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.361194 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tlnnx" Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.506751 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g4x6\" (UniqueName: \"kubernetes.io/projected/99a3372e-12c9-434c-9def-f5d614ef049c-kube-api-access-8g4x6\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.506865 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99a3372e-12c9-434c-9def-f5d614ef049c-etc-swift\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.506932 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-scripts\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.506993 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-ring-data-devices\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507131 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-dispersionconf\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") " Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507270 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-combined-ca-bundle\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") "
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507328 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-swiftconf\") pod \"99a3372e-12c9-434c-9def-f5d614ef049c\" (UID: \"99a3372e-12c9-434c-9def-f5d614ef049c\") "
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507448 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99a3372e-12c9-434c-9def-f5d614ef049c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507812 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507970 4729 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/99a3372e-12c9-434c-9def-f5d614ef049c-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.507988 4729 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.509332 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-scripts" (OuterVolumeSpecName: "scripts") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.512159 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.512766 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.512906 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.513513 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a3372e-12c9-434c-9def-f5d614ef049c-kube-api-access-8g4x6" (OuterVolumeSpecName: "kube-api-access-8g4x6") pod "99a3372e-12c9-434c-9def-f5d614ef049c" (UID: "99a3372e-12c9-434c-9def-f5d614ef049c"). InnerVolumeSpecName "kube-api-access-8g4x6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.610855 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.610944 4729 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.610986 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g4x6\" (UniqueName: \"kubernetes.io/projected/99a3372e-12c9-434c-9def-f5d614ef049c-kube-api-access-8g4x6\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.610996 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99a3372e-12c9-434c-9def-f5d614ef049c-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:50 crc kubenswrapper[4729]: I0127 14:30:50.611006 4729 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/99a3372e-12c9-434c-9def-f5d614ef049c-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:51 crc kubenswrapper[4729]: I0127 14:30:51.051504 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898"
Jan 27 14:30:51 crc kubenswrapper[4729]: E0127 14:30:51.052139 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 14:30:51 crc kubenswrapper[4729]: I0127 14:30:51.350512 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tlnnx"
Jan 27 14:30:51 crc kubenswrapper[4729]: I0127 14:30:51.421718 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-tlnnx"]
Jan 27 14:30:51 crc kubenswrapper[4729]: I0127 14:30:51.434256 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-tlnnx"]
Jan 27 14:30:51 crc kubenswrapper[4729]: I0127 14:30:51.531928 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 27 14:30:51 crc kubenswrapper[4729]: I0127 14:30:51.531978 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 27 14:30:52 crc kubenswrapper[4729]: I0127 14:30:52.064639 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a3372e-12c9-434c-9def-f5d614ef049c" path="/var/lib/kubelet/pods/99a3372e-12c9-434c-9def-f5d614ef049c/volumes"
Jan 27 14:30:52 crc kubenswrapper[4729]: I0127 14:30:52.956809 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:30:52 crc kubenswrapper[4729]: I0127 14:30:52.957261 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:30:53 crc kubenswrapper[4729]: I0127 14:30:53.086640 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:30:53 crc kubenswrapper[4729]: I0127 14:30:53.448768 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 27 14:30:53 crc kubenswrapper[4729]: I0127 14:30:53.679643 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0"
Jan 27 14:30:53 crc kubenswrapper[4729]: E0127 14:30:53.679853 4729 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 27 14:30:53 crc kubenswrapper[4729]: E0127 14:30:53.680143 4729 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 27 14:30:53 crc kubenswrapper[4729]: E0127 14:30:53.680192 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift podName:399f6c9f-a3d5-4235-bce9-f3623e6be7f4 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:01.680175786 +0000 UTC m=+1548.264366790 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift") pod "swift-storage-0" (UID: "399f6c9f-a3d5-4235-bce9-f3623e6be7f4") : configmap "swift-ring-files" not found
Jan 27 14:30:54 crc kubenswrapper[4729]: I0127 14:30:54.666708 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.155528 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.167949 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.253870 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sfs6\" (UniqueName: \"kubernetes.io/projected/c814f616-ec2d-45bf-9f63-04e825e85942-kube-api-access-6sfs6\") pod \"c814f616-ec2d-45bf-9f63-04e825e85942\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") "
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.253957 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-dns-svc\") pod \"b07f2206-b339-4038-be2f-d0e3301064e0\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") "
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.253982 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-config\") pod \"b07f2206-b339-4038-be2f-d0e3301064e0\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") "
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.254302 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-dns-svc\") pod \"c814f616-ec2d-45bf-9f63-04e825e85942\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") "
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.254346 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz9dm\" (UniqueName: \"kubernetes.io/projected/b07f2206-b339-4038-be2f-d0e3301064e0-kube-api-access-xz9dm\") pod \"b07f2206-b339-4038-be2f-d0e3301064e0\" (UID: \"b07f2206-b339-4038-be2f-d0e3301064e0\") "
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.254385 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-config\") pod \"c814f616-ec2d-45bf-9f63-04e825e85942\" (UID: \"c814f616-ec2d-45bf-9f63-04e825e85942\") "
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.261229 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07f2206-b339-4038-be2f-d0e3301064e0-kube-api-access-xz9dm" (OuterVolumeSpecName: "kube-api-access-xz9dm") pod "b07f2206-b339-4038-be2f-d0e3301064e0" (UID: "b07f2206-b339-4038-be2f-d0e3301064e0"). InnerVolumeSpecName "kube-api-access-xz9dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.262274 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c814f616-ec2d-45bf-9f63-04e825e85942-kube-api-access-6sfs6" (OuterVolumeSpecName: "kube-api-access-6sfs6") pod "c814f616-ec2d-45bf-9f63-04e825e85942" (UID: "c814f616-ec2d-45bf-9f63-04e825e85942"). InnerVolumeSpecName "kube-api-access-6sfs6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.305156 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-config" (OuterVolumeSpecName: "config") pod "c814f616-ec2d-45bf-9f63-04e825e85942" (UID: "c814f616-ec2d-45bf-9f63-04e825e85942"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.310290 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-config" (OuterVolumeSpecName: "config") pod "b07f2206-b339-4038-be2f-d0e3301064e0" (UID: "b07f2206-b339-4038-be2f-d0e3301064e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.326790 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b07f2206-b339-4038-be2f-d0e3301064e0" (UID: "b07f2206-b339-4038-be2f-d0e3301064e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.358253 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.358302 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sfs6\" (UniqueName: \"kubernetes.io/projected/c814f616-ec2d-45bf-9f63-04e825e85942-kube-api-access-6sfs6\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.358321 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.358339 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07f2206-b339-4038-be2f-d0e3301064e0-config\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.358351 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz9dm\" (UniqueName: \"kubernetes.io/projected/b07f2206-b339-4038-be2f-d0e3301064e0-kube-api-access-xz9dm\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.385240 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c814f616-ec2d-45bf-9f63-04e825e85942" (UID: "c814f616-ec2d-45bf-9f63-04e825e85942"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.395952 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.396012 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" event={"ID":"b07f2206-b339-4038-be2f-d0e3301064e0","Type":"ContainerDied","Data":"6388d643bba207c8d10ddbfeeb81fd32a85c6d4dfc810671a8267b24564da4dd"}
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.396114 4729 scope.go:117] "RemoveContainer" containerID="7ee00ea01460c40d193e8a54f8162c0aac6a559b981a90d97cd421be3693751f"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.401904 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx" event={"ID":"c814f616-ec2d-45bf-9f63-04e825e85942","Type":"ContainerDied","Data":"83fe44f7bfcbbb4bbd64f74cf4c75041c55cc6d7c24b2b64b8ebb38f66d40957"}
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.402050 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-j2ktx"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.441265 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qfcl6"]
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.450909 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qfcl6"]
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.456015 4729 scope.go:117] "RemoveContainer" containerID="1060bdf4fe2799f1353504426ff6b1de15fc817b88df76f7d2864fc2a7d16203"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.461153 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c814f616-ec2d-45bf-9f63-04e825e85942-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.471713 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-z4znt"]
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.497472 4729 scope.go:117] "RemoveContainer" containerID="72475453719cdb712e20a9b439107936af647791a03204bc829a996b9ac769bf"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.526129 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-j2ktx"]
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.535960 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-j2ktx"]
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.673849 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.771686 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 27 14:30:56 crc kubenswrapper[4729]: I0127 14:30:56.926073 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d65f699f-w269z"
Jan 27 14:30:57 crc kubenswrapper[4729]: I0127 14:30:57.414975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" event={"ID":"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3","Type":"ContainerStarted","Data":"d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387"}
Jan 27 14:30:57 crc kubenswrapper[4729]: I0127 14:30:57.416273 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb"
Jan 27 14:30:57 crc kubenswrapper[4729]: I0127 14:30:57.431346 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerStarted","Data":"a0c233716c4a01787aefd955aeb3845c44cbcc26c5938224e84faa71f80af099"}
Jan 27 14:30:57 crc kubenswrapper[4729]: I0127 14:30:57.439707 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" podStartSLOduration=11.43969371 podStartE2EDuration="11.43969371s" podCreationTimestamp="2026-01-27 14:30:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:30:57.437016677 +0000 UTC m=+1544.021207701" watchObservedRunningTime="2026-01-27 14:30:57.43969371 +0000 UTC m=+1544.023884704"
Jan 27 14:30:57 crc kubenswrapper[4729]: I0127 14:30:57.442070 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z4znt" event={"ID":"fb94bfab-bf68-4e03-9a32-b4de4d765b1f","Type":"ContainerStarted","Data":"a621ff7aede60b8b1ba1049c1c8b3efeae3b14b34f565afaea6cc0f645248b1c"}
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.071754 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" path="/var/lib/kubelet/pods/b07f2206-b339-4038-be2f-d0e3301064e0/volumes"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.073115 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c814f616-ec2d-45bf-9f63-04e825e85942" path="/var/lib/kubelet/pods/c814f616-ec2d-45bf-9f63-04e825e85942/volumes"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.190413 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lhgr5"]
Jan 27 14:30:58 crc kubenswrapper[4729]: E0127 14:30:58.191169 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c814f616-ec2d-45bf-9f63-04e825e85942" containerName="init"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.191193 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c814f616-ec2d-45bf-9f63-04e825e85942" containerName="init"
Jan 27 14:30:58 crc kubenswrapper[4729]: E0127 14:30:58.191217 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="dnsmasq-dns"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.191224 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="dnsmasq-dns"
Jan 27 14:30:58 crc kubenswrapper[4729]: E0127 14:30:58.191270 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="init"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.191280 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="init"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.191514 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="dnsmasq-dns"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.191537 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c814f616-ec2d-45bf-9f63-04e825e85942" containerName="init"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.192335 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.201107 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lhgr5"]
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.315943 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a37f0cf-1920-436e-b40d-ba267fb85828-operator-scripts\") pod \"glance-db-create-lhgr5\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.316031 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4t5\" (UniqueName: \"kubernetes.io/projected/7a37f0cf-1920-436e-b40d-ba267fb85828-kube-api-access-zv4t5\") pod \"glance-db-create-lhgr5\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.319047 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-14a7-account-create-update-q9mp6"]
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.321586 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.328445 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.334199 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-14a7-account-create-update-q9mp6"]
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.422846 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e78fb6de-dff5-4785-90ac-bfda868d9d12-operator-scripts\") pod \"glance-14a7-account-create-update-q9mp6\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.422983 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a37f0cf-1920-436e-b40d-ba267fb85828-operator-scripts\") pod \"glance-db-create-lhgr5\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.423060 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4t5\" (UniqueName: \"kubernetes.io/projected/7a37f0cf-1920-436e-b40d-ba267fb85828-kube-api-access-zv4t5\") pod \"glance-db-create-lhgr5\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.423315 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28q8t\" (UniqueName: \"kubernetes.io/projected/e78fb6de-dff5-4785-90ac-bfda868d9d12-kube-api-access-28q8t\") pod \"glance-14a7-account-create-update-q9mp6\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.423703 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a37f0cf-1920-436e-b40d-ba267fb85828-operator-scripts\") pod \"glance-db-create-lhgr5\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.442329 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4t5\" (UniqueName: \"kubernetes.io/projected/7a37f0cf-1920-436e-b40d-ba267fb85828-kube-api-access-zv4t5\") pod \"glance-db-create-lhgr5\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.451433 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c1da68b-399e-4543-918f-6deed78e3626","Type":"ContainerStarted","Data":"5fa16b77e65200d206ccd6f2799927e8f0ce805cb1e9ebf28ea80679e6967992"}
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.451541 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6c1da68b-399e-4543-918f-6deed78e3626","Type":"ContainerStarted","Data":"45e31a3914c5313808a56dfcff2a3e3d37f4fa3bc9dd2939296c67775c0250ba"}
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.452392 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.480436 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.401165217 podStartE2EDuration="12.48041803s" podCreationTimestamp="2026-01-27 14:30:46 +0000 UTC" firstStartedPulling="2026-01-27 14:30:48.219918572 +0000 UTC m=+1534.804109576" lastFinishedPulling="2026-01-27 14:30:57.299171395 +0000 UTC m=+1543.883362389" observedRunningTime="2026-01-27 14:30:58.478388984 +0000 UTC m=+1545.062580008" watchObservedRunningTime="2026-01-27 14:30:58.48041803 +0000 UTC m=+1545.064609034"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.523164 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lhgr5"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.525741 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28q8t\" (UniqueName: \"kubernetes.io/projected/e78fb6de-dff5-4785-90ac-bfda868d9d12-kube-api-access-28q8t\") pod \"glance-14a7-account-create-update-q9mp6\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.525855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e78fb6de-dff5-4785-90ac-bfda868d9d12-operator-scripts\") pod \"glance-14a7-account-create-update-q9mp6\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.526540 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e78fb6de-dff5-4785-90ac-bfda868d9d12-operator-scripts\") pod \"glance-14a7-account-create-update-q9mp6\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.546048 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28q8t\" (UniqueName: \"kubernetes.io/projected/e78fb6de-dff5-4785-90ac-bfda868d9d12-kube-api-access-28q8t\") pod \"glance-14a7-account-create-update-q9mp6\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.645952 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-14a7-account-create-update-q9mp6"
Jan 27 14:30:58 crc kubenswrapper[4729]: I0127 14:30:58.652393 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-qfcl6" podUID="b07f2206-b339-4038-be2f-d0e3301064e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout"
Jan 27 14:30:59 crc kubenswrapper[4729]: I0127 14:30:59.938139 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fhhx5"]
Jan 27 14:30:59 crc kubenswrapper[4729]: I0127 14:30:59.941899 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:30:59 crc kubenswrapper[4729]: I0127 14:30:59.945299 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 27 14:30:59 crc kubenswrapper[4729]: I0127 14:30:59.950598 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fhhx5"]
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.058289 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpsg\" (UniqueName: \"kubernetes.io/projected/eaaf1648-247f-4fe8-aa81-3860b683d405-kube-api-access-gfpsg\") pod \"root-account-create-update-fhhx5\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.058365 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaaf1648-247f-4fe8-aa81-3860b683d405-operator-scripts\") pod \"root-account-create-update-fhhx5\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.161310 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpsg\" (UniqueName: \"kubernetes.io/projected/eaaf1648-247f-4fe8-aa81-3860b683d405-kube-api-access-gfpsg\") pod \"root-account-create-update-fhhx5\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.161733 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaaf1648-247f-4fe8-aa81-3860b683d405-operator-scripts\") pod \"root-account-create-update-fhhx5\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.163289 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaaf1648-247f-4fe8-aa81-3860b683d405-operator-scripts\") pod \"root-account-create-update-fhhx5\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.186091 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpsg\" (UniqueName: \"kubernetes.io/projected/eaaf1648-247f-4fe8-aa81-3860b683d405-kube-api-access-gfpsg\") pod \"root-account-create-update-fhhx5\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.268189 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fhhx5"
Jan 27 14:31:00 crc kubenswrapper[4729]: W0127 14:31:00.700972 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a37f0cf_1920_436e_b40d_ba267fb85828.slice/crio-bcf83d7f723c6d182eff3aa156236f645b8da9f2df59d7bdd35242e09b92515b WatchSource:0}: Error finding container bcf83d7f723c6d182eff3aa156236f645b8da9f2df59d7bdd35242e09b92515b: Status 404 returned error can't find the container with id bcf83d7f723c6d182eff3aa156236f645b8da9f2df59d7bdd35242e09b92515b
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.708463 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lhgr5"]
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.877293 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-14a7-account-create-update-q9mp6"]
Jan 27 14:31:00 crc kubenswrapper[4729]: W0127 14:31:00.885832 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaaf1648_247f_4fe8_aa81_3860b683d405.slice/crio-d0980a9c2a6881bdec08e2b62165dc19bb24a55c657887df90ab5118f6573e8e WatchSource:0}: Error finding container d0980a9c2a6881bdec08e2b62165dc19bb24a55c657887df90ab5118f6573e8e: Status 404 returned error can't find the container with id d0980a9c2a6881bdec08e2b62165dc19bb24a55c657887df90ab5118f6573e8e
Jan 27 14:31:00 crc kubenswrapper[4729]: W0127 14:31:00.887342 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode78fb6de_dff5_4785_90ac_bfda868d9d12.slice/crio-d4a39633ce00d6de94ca489f55448ab61493064733ae07e391d78b9306fe17cf WatchSource:0}: Error finding container d4a39633ce00d6de94ca489f55448ab61493064733ae07e391d78b9306fe17cf: Status 404 returned error can't find the container with id d4a39633ce00d6de94ca489f55448ab61493064733ae07e391d78b9306fe17cf
Jan 27 14:31:00 crc kubenswrapper[4729]: I0127 14:31:00.893370 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fhhx5"]
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.479981 4729 generic.go:334] "Generic (PLEG): container finished" podID="eaaf1648-247f-4fe8-aa81-3860b683d405" containerID="d3dc3d8fbfb4f2ff47d1f545a3371f55ce4f69874c6aa23dfa46dc2a802e8f97" exitCode=0
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.480076 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fhhx5" event={"ID":"eaaf1648-247f-4fe8-aa81-3860b683d405","Type":"ContainerDied","Data":"d3dc3d8fbfb4f2ff47d1f545a3371f55ce4f69874c6aa23dfa46dc2a802e8f97"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.480341 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fhhx5" event={"ID":"eaaf1648-247f-4fe8-aa81-3860b683d405","Type":"ContainerStarted","Data":"d0980a9c2a6881bdec08e2b62165dc19bb24a55c657887df90ab5118f6573e8e"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.483270 4729 generic.go:334] "Generic (PLEG): container finished" podID="e78fb6de-dff5-4785-90ac-bfda868d9d12" containerID="0c06215c5f758770f50428174ac0270a4d2cd81fe0f70dc84d40b54054566404" exitCode=0
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.483319 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-14a7-account-create-update-q9mp6" event={"ID":"e78fb6de-dff5-4785-90ac-bfda868d9d12","Type":"ContainerDied","Data":"0c06215c5f758770f50428174ac0270a4d2cd81fe0f70dc84d40b54054566404"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.483371 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-14a7-account-create-update-q9mp6" event={"ID":"e78fb6de-dff5-4785-90ac-bfda868d9d12","Type":"ContainerStarted","Data":"d4a39633ce00d6de94ca489f55448ab61493064733ae07e391d78b9306fe17cf"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.486667 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerStarted","Data":"5a91e205df2de1035e3bab27ad791de5d5a5303cbea68d0070693cea5ac8636d"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.488624 4729 generic.go:334] "Generic (PLEG): container finished" podID="7a37f0cf-1920-436e-b40d-ba267fb85828" containerID="2447ecda9cd2fd75789f474b077981452246b1878e2ab3fdce9805758e83eb6f" exitCode=0
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.488655 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lhgr5" event={"ID":"7a37f0cf-1920-436e-b40d-ba267fb85828","Type":"ContainerDied","Data":"2447ecda9cd2fd75789f474b077981452246b1878e2ab3fdce9805758e83eb6f"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.488687 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lhgr5" event={"ID":"7a37f0cf-1920-436e-b40d-ba267fb85828","Type":"ContainerStarted","Data":"bcf83d7f723c6d182eff3aa156236f645b8da9f2df59d7bdd35242e09b92515b"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.490628 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z4znt" event={"ID":"fb94bfab-bf68-4e03-9a32-b4de4d765b1f","Type":"ContainerStarted","Data":"21cbba3b443fbaa5293304e2724938a0107380d81f34fa5d335fc8148ad266f6"}
Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.522910 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-z4znt" podStartSLOduration=8.871553136 podStartE2EDuration="12.522869662s" podCreationTimestamp="2026-01-27 14:30:49 +0000 UTC" firstStartedPulling="2026-01-27
14:30:56.657896882 +0000 UTC m=+1543.242087886" lastFinishedPulling="2026-01-27 14:31:00.309213408 +0000 UTC m=+1546.893404412" observedRunningTime="2026-01-27 14:31:01.517308711 +0000 UTC m=+1548.101499755" watchObservedRunningTime="2026-01-27 14:31:01.522869662 +0000 UTC m=+1548.107060666" Jan 27 14:31:01 crc kubenswrapper[4729]: I0127 14:31:01.695465 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:31:01 crc kubenswrapper[4729]: E0127 14:31:01.695630 4729 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 27 14:31:01 crc kubenswrapper[4729]: E0127 14:31:01.695769 4729 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 27 14:31:01 crc kubenswrapper[4729]: E0127 14:31:01.695833 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift podName:399f6c9f-a3d5-4235-bce9-f3623e6be7f4 nodeName:}" failed. No retries permitted until 2026-01-27 14:31:17.695812998 +0000 UTC m=+1564.280004002 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift") pod "swift-storage-0" (UID: "399f6c9f-a3d5-4235-bce9-f3623e6be7f4") : configmap "swift-ring-files" not found Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.051831 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:31:02 crc kubenswrapper[4729]: E0127 14:31:02.052183 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.249141 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.312731 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-w269z"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.313045 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d65f699f-w269z" podUID="48f97a04-f193-482e-a7c9-819fb926008f" containerName="dnsmasq-dns" containerID="cri-o://8ac129e8e4f56ce5caddb3d023c0ff07d4ddea68ddae39f42b1ed4178e9f3128" gracePeriod=10 Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.387603 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nzfnl"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.389827 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.407196 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nzfnl"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.505280 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4dc2-account-create-update-bqd4c"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.507279 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.510154 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.514365 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc7673-9046-4fee-a6f2-060f5566f405-operator-scripts\") pod \"keystone-db-create-nzfnl\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.514509 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksrpn\" (UniqueName: \"kubernetes.io/projected/07fc7673-9046-4fee-a6f2-060f5566f405-kube-api-access-ksrpn\") pod \"keystone-db-create-nzfnl\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.525534 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4dc2-account-create-update-bqd4c"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.528129 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-w269z" 
event={"ID":"48f97a04-f193-482e-a7c9-819fb926008f","Type":"ContainerDied","Data":"8ac129e8e4f56ce5caddb3d023c0ff07d4ddea68ddae39f42b1ed4178e9f3128"} Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.528162 4729 generic.go:334] "Generic (PLEG): container finished" podID="48f97a04-f193-482e-a7c9-819fb926008f" containerID="8ac129e8e4f56ce5caddb3d023c0ff07d4ddea68ddae39f42b1ed4178e9f3128" exitCode=0 Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.616336 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc7673-9046-4fee-a6f2-060f5566f405-operator-scripts\") pod \"keystone-db-create-nzfnl\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.616406 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwrv\" (UniqueName: \"kubernetes.io/projected/b6276002-6f52-4082-b413-767e4b80717a-kube-api-access-hkwrv\") pod \"keystone-4dc2-account-create-update-bqd4c\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.616489 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksrpn\" (UniqueName: \"kubernetes.io/projected/07fc7673-9046-4fee-a6f2-060f5566f405-kube-api-access-ksrpn\") pod \"keystone-db-create-nzfnl\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.616531 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6276002-6f52-4082-b413-767e4b80717a-operator-scripts\") pod \"keystone-4dc2-account-create-update-bqd4c\" (UID: 
\"b6276002-6f52-4082-b413-767e4b80717a\") " pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.617637 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc7673-9046-4fee-a6f2-060f5566f405-operator-scripts\") pod \"keystone-db-create-nzfnl\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.642137 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksrpn\" (UniqueName: \"kubernetes.io/projected/07fc7673-9046-4fee-a6f2-060f5566f405-kube-api-access-ksrpn\") pod \"keystone-db-create-nzfnl\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.683826 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lzl7v"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.685411 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.705116 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lzl7v"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.718501 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkwrv\" (UniqueName: \"kubernetes.io/projected/b6276002-6f52-4082-b413-767e4b80717a-kube-api-access-hkwrv\") pod \"keystone-4dc2-account-create-update-bqd4c\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.718582 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6276002-6f52-4082-b413-767e4b80717a-operator-scripts\") pod \"keystone-4dc2-account-create-update-bqd4c\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.719286 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6276002-6f52-4082-b413-767e4b80717a-operator-scripts\") pod \"keystone-4dc2-account-create-update-bqd4c\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.721724 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.742386 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkwrv\" (UniqueName: \"kubernetes.io/projected/b6276002-6f52-4082-b413-767e4b80717a-kube-api-access-hkwrv\") pod \"keystone-4dc2-account-create-update-bqd4c\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.813138 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8d6c-account-create-update-2vb5h"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.814740 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.819372 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.821527 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45aa9c0b-9db2-415e-9cc9-f552f9127f34-operator-scripts\") pod \"placement-db-create-lzl7v\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.821659 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjq2w\" (UniqueName: \"kubernetes.io/projected/45aa9c0b-9db2-415e-9cc9-f552f9127f34-kube-api-access-kjq2w\") pod \"placement-db-create-lzl7v\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.824782 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-8d6c-account-create-update-2vb5h"] Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.849053 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.928115 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-operator-scripts\") pod \"placement-8d6c-account-create-update-2vb5h\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.928537 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjq2w\" (UniqueName: \"kubernetes.io/projected/45aa9c0b-9db2-415e-9cc9-f552f9127f34-kube-api-access-kjq2w\") pod \"placement-db-create-lzl7v\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.928749 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv8r7\" (UniqueName: \"kubernetes.io/projected/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-kube-api-access-qv8r7\") pod \"placement-8d6c-account-create-update-2vb5h\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.929055 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45aa9c0b-9db2-415e-9cc9-f552f9127f34-operator-scripts\") pod \"placement-db-create-lzl7v\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.933564 
4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45aa9c0b-9db2-415e-9cc9-f552f9127f34-operator-scripts\") pod \"placement-db-create-lzl7v\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:02 crc kubenswrapper[4729]: I0127 14:31:02.955376 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjq2w\" (UniqueName: \"kubernetes.io/projected/45aa9c0b-9db2-415e-9cc9-f552f9127f34-kube-api-access-kjq2w\") pod \"placement-db-create-lzl7v\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.020269 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.031047 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-operator-scripts\") pod \"placement-8d6c-account-create-update-2vb5h\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.031184 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv8r7\" (UniqueName: \"kubernetes.io/projected/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-kube-api-access-qv8r7\") pod \"placement-8d6c-account-create-update-2vb5h\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.032002 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-operator-scripts\") pod 
\"placement-8d6c-account-create-update-2vb5h\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.050442 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv8r7\" (UniqueName: \"kubernetes.io/projected/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-kube-api-access-qv8r7\") pod \"placement-8d6c-account-create-update-2vb5h\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.072738 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.193152 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.242048 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gk2cz" podUID="5e4b5a47-ff01-4fd6-b69f-4d70efc77a12" containerName="ovn-controller" probeResult="failure" output=< Jan 27 14:31:03 crc kubenswrapper[4729]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 27 14:31:03 crc kubenswrapper[4729]: > Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.245872 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-dns-svc\") pod \"48f97a04-f193-482e-a7c9-819fb926008f\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.246210 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-config\") pod 
\"48f97a04-f193-482e-a7c9-819fb926008f\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.246281 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-ovsdbserver-nb\") pod \"48f97a04-f193-482e-a7c9-819fb926008f\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.246431 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-662gm\" (UniqueName: \"kubernetes.io/projected/48f97a04-f193-482e-a7c9-819fb926008f-kube-api-access-662gm\") pod \"48f97a04-f193-482e-a7c9-819fb926008f\" (UID: \"48f97a04-f193-482e-a7c9-819fb926008f\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.309807 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f97a04-f193-482e-a7c9-819fb926008f-kube-api-access-662gm" (OuterVolumeSpecName: "kube-api-access-662gm") pod "48f97a04-f193-482e-a7c9-819fb926008f" (UID: "48f97a04-f193-482e-a7c9-819fb926008f"). InnerVolumeSpecName "kube-api-access-662gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.313584 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gsqqc" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.338119 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gsqqc" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.338895 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-config" (OuterVolumeSpecName: "config") pod "48f97a04-f193-482e-a7c9-819fb926008f" (UID: "48f97a04-f193-482e-a7c9-819fb926008f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.343981 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48f97a04-f193-482e-a7c9-819fb926008f" (UID: "48f97a04-f193-482e-a7c9-819fb926008f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.361673 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.361715 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.361736 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-662gm\" (UniqueName: \"kubernetes.io/projected/48f97a04-f193-482e-a7c9-819fb926008f-kube-api-access-662gm\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.473842 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48f97a04-f193-482e-a7c9-819fb926008f" (UID: "48f97a04-f193-482e-a7c9-819fb926008f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.478806 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48f97a04-f193-482e-a7c9-819fb926008f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.591297 4729 generic.go:334] "Generic (PLEG): container finished" podID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerID="d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e" exitCode=0 Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.591409 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b490b2c5-e772-48d2-a2cc-582bda8b019e","Type":"ContainerDied","Data":"d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e"} Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.598765 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d65f699f-w269z" event={"ID":"48f97a04-f193-482e-a7c9-819fb926008f","Type":"ContainerDied","Data":"177e76ac38bf8d1904ca8bbe6b37735ee5e53ce3a44ff0e0208a694e30dc0c3e"} Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.598821 4729 scope.go:117] "RemoveContainer" containerID="8ac129e8e4f56ce5caddb3d023c0ff07d4ddea68ddae39f42b1ed4178e9f3128" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.599012 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d65f699f-w269z" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.628590 4729 generic.go:334] "Generic (PLEG): container finished" podID="190a5200-58b1-4ada-ab5f-47543de0795e" containerID="2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742" exitCode=0 Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.629843 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"190a5200-58b1-4ada-ab5f-47543de0795e","Type":"ContainerDied","Data":"2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742"} Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.675618 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gk2cz-config-vc7pl"] Jan 27 14:31:03 crc kubenswrapper[4729]: E0127 14:31:03.677521 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f97a04-f193-482e-a7c9-819fb926008f" containerName="init" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.677537 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f97a04-f193-482e-a7c9-819fb926008f" containerName="init" Jan 27 14:31:03 crc kubenswrapper[4729]: E0127 14:31:03.677578 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f97a04-f193-482e-a7c9-819fb926008f" containerName="dnsmasq-dns" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.677586 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f97a04-f193-482e-a7c9-819fb926008f" containerName="dnsmasq-dns" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.677796 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f97a04-f193-482e-a7c9-819fb926008f" containerName="dnsmasq-dns" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.678527 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.682601 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.691944 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gk2cz-config-vc7pl"] Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.727042 4729 scope.go:117] "RemoveContainer" containerID="60802f82cf6fee0e9ee3a7b3381b4043a7f2c0711a44219f3adb78d283e28b39" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.750840 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-14a7-account-create-update-q9mp6" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.774785 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-fhhx5" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.789069 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-w269z"] Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.796691 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e78fb6de-dff5-4785-90ac-bfda868d9d12-operator-scripts\") pod \"e78fb6de-dff5-4785-90ac-bfda868d9d12\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.796736 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfpsg\" (UniqueName: \"kubernetes.io/projected/eaaf1648-247f-4fe8-aa81-3860b683d405-kube-api-access-gfpsg\") pod \"eaaf1648-247f-4fe8-aa81-3860b683d405\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.796800 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaaf1648-247f-4fe8-aa81-3860b683d405-operator-scripts\") pod \"eaaf1648-247f-4fe8-aa81-3860b683d405\" (UID: \"eaaf1648-247f-4fe8-aa81-3860b683d405\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.797175 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.797297 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-log-ovn\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.797329 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-additional-scripts\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.797358 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh42q\" (UniqueName: \"kubernetes.io/projected/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-kube-api-access-lh42q\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 
14:31:03.797476 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run-ovn\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.797532 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-scripts\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.797175 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78fb6de-dff5-4785-90ac-bfda868d9d12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e78fb6de-dff5-4785-90ac-bfda868d9d12" (UID: "e78fb6de-dff5-4785-90ac-bfda868d9d12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.798014 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaaf1648-247f-4fe8-aa81-3860b683d405-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eaaf1648-247f-4fe8-aa81-3860b683d405" (UID: "eaaf1648-247f-4fe8-aa81-3860b683d405"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.801349 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d65f699f-w269z"] Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.801841 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaaf1648-247f-4fe8-aa81-3860b683d405-kube-api-access-gfpsg" (OuterVolumeSpecName: "kube-api-access-gfpsg") pod "eaaf1648-247f-4fe8-aa81-3860b683d405" (UID: "eaaf1648-247f-4fe8-aa81-3860b683d405"). InnerVolumeSpecName "kube-api-access-gfpsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.805253 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lhgr5" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.898562 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28q8t\" (UniqueName: \"kubernetes.io/projected/e78fb6de-dff5-4785-90ac-bfda868d9d12-kube-api-access-28q8t\") pod \"e78fb6de-dff5-4785-90ac-bfda868d9d12\" (UID: \"e78fb6de-dff5-4785-90ac-bfda868d9d12\") " Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899461 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899552 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-log-ovn\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " 
pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899576 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-additional-scripts\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899597 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh42q\" (UniqueName: \"kubernetes.io/projected/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-kube-api-access-lh42q\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899681 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run-ovn\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899730 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-scripts\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899894 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e78fb6de-dff5-4785-90ac-bfda868d9d12-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899907 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-log-ovn\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899920 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfpsg\" (UniqueName: \"kubernetes.io/projected/eaaf1648-247f-4fe8-aa81-3860b683d405-kube-api-access-gfpsg\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.899934 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eaaf1648-247f-4fe8-aa81-3860b683d405-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.900080 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.900170 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run-ovn\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.906470 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78fb6de-dff5-4785-90ac-bfda868d9d12-kube-api-access-28q8t" (OuterVolumeSpecName: "kube-api-access-28q8t") pod "e78fb6de-dff5-4785-90ac-bfda868d9d12" (UID: "e78fb6de-dff5-4785-90ac-bfda868d9d12"). 
InnerVolumeSpecName "kube-api-access-28q8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.909013 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-scripts\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.913567 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-additional-scripts\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:03 crc kubenswrapper[4729]: I0127 14:31:03.940941 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh42q\" (UniqueName: \"kubernetes.io/projected/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-kube-api-access-lh42q\") pod \"ovn-controller-gk2cz-config-vc7pl\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.001578 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv4t5\" (UniqueName: \"kubernetes.io/projected/7a37f0cf-1920-436e-b40d-ba267fb85828-kube-api-access-zv4t5\") pod \"7a37f0cf-1920-436e-b40d-ba267fb85828\" (UID: \"7a37f0cf-1920-436e-b40d-ba267fb85828\") " Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.002345 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a37f0cf-1920-436e-b40d-ba267fb85828-operator-scripts\") pod \"7a37f0cf-1920-436e-b40d-ba267fb85828\" (UID: 
\"7a37f0cf-1920-436e-b40d-ba267fb85828\") " Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.003846 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28q8t\" (UniqueName: \"kubernetes.io/projected/e78fb6de-dff5-4785-90ac-bfda868d9d12-kube-api-access-28q8t\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.004196 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a37f0cf-1920-436e-b40d-ba267fb85828-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a37f0cf-1920-436e-b40d-ba267fb85828" (UID: "7a37f0cf-1920-436e-b40d-ba267fb85828"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.006470 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a37f0cf-1920-436e-b40d-ba267fb85828-kube-api-access-zv4t5" (OuterVolumeSpecName: "kube-api-access-zv4t5") pod "7a37f0cf-1920-436e-b40d-ba267fb85828" (UID: "7a37f0cf-1920-436e-b40d-ba267fb85828"). InnerVolumeSpecName "kube-api-access-zv4t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.097255 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.116188 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zv4t5\" (UniqueName: \"kubernetes.io/projected/7a37f0cf-1920-436e-b40d-ba267fb85828-kube-api-access-zv4t5\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.116234 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a37f0cf-1920-436e-b40d-ba267fb85828-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.130567 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f97a04-f193-482e-a7c9-819fb926008f" path="/var/lib/kubelet/pods/48f97a04-f193-482e-a7c9-819fb926008f/volumes" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.133637 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nzfnl"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.133674 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4dc2-account-create-update-bqd4c"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.225840 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lzl7v"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.371080 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8d6c-account-create-update-2vb5h"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.404966 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-gt6kg"] Jan 27 14:31:04 crc kubenswrapper[4729]: E0127 14:31:04.405516 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78fb6de-dff5-4785-90ac-bfda868d9d12" containerName="mariadb-account-create-update" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 
14:31:04.405532 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78fb6de-dff5-4785-90ac-bfda868d9d12" containerName="mariadb-account-create-update" Jan 27 14:31:04 crc kubenswrapper[4729]: E0127 14:31:04.405558 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaaf1648-247f-4fe8-aa81-3860b683d405" containerName="mariadb-account-create-update" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.405568 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaaf1648-247f-4fe8-aa81-3860b683d405" containerName="mariadb-account-create-update" Jan 27 14:31:04 crc kubenswrapper[4729]: E0127 14:31:04.405611 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a37f0cf-1920-436e-b40d-ba267fb85828" containerName="mariadb-database-create" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.405619 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a37f0cf-1920-436e-b40d-ba267fb85828" containerName="mariadb-database-create" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.405902 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaaf1648-247f-4fe8-aa81-3860b683d405" containerName="mariadb-account-create-update" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.405918 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78fb6de-dff5-4785-90ac-bfda868d9d12" containerName="mariadb-account-create-update" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.405939 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a37f0cf-1920-436e-b40d-ba267fb85828" containerName="mariadb-database-create" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.406824 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.426057 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-gt6kg"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.444506 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-gt6kg\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.444596 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtzbc\" (UniqueName: \"kubernetes.io/projected/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-kube-api-access-xtzbc\") pod \"mysqld-exporter-openstack-db-create-gt6kg\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.549808 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-gt6kg\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.549948 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtzbc\" (UniqueName: \"kubernetes.io/projected/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-kube-api-access-xtzbc\") pod \"mysqld-exporter-openstack-db-create-gt6kg\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 
27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.551984 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-gt6kg\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.605514 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtzbc\" (UniqueName: \"kubernetes.io/projected/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-kube-api-access-xtzbc\") pod \"mysqld-exporter-openstack-db-create-gt6kg\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.676999 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-d33f-account-create-update-jrz6t"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.678692 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.681855 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.692278 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b490b2c5-e772-48d2-a2cc-582bda8b019e","Type":"ContainerStarted","Data":"2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.692911 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.735510 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"190a5200-58b1-4ada-ab5f-47543de0795e","Type":"ContainerStarted","Data":"3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.738054 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.751785 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc2-account-create-update-bqd4c" event={"ID":"b6276002-6f52-4082-b413-767e4b80717a","Type":"ContainerStarted","Data":"aa65169390446ab20256a54d4a230ed0d35b4a5631f18985f4de82504c0a4b1e"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.751831 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc2-account-create-update-bqd4c" event={"ID":"b6276002-6f52-4082-b413-767e4b80717a","Type":"ContainerStarted","Data":"03082337a8d397d7a5ea7682982b93b1568e10fb568fa4df32a67bb0e833138d"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.760294 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-14a7-account-create-update-q9mp6" event={"ID":"e78fb6de-dff5-4785-90ac-bfda868d9d12","Type":"ContainerDied","Data":"d4a39633ce00d6de94ca489f55448ab61493064733ae07e391d78b9306fe17cf"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.760342 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4a39633ce00d6de94ca489f55448ab61493064733ae07e391d78b9306fe17cf" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.760425 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-14a7-account-create-update-q9mp6" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.763714 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lzl7v" event={"ID":"45aa9c0b-9db2-415e-9cc9-f552f9127f34","Type":"ContainerStarted","Data":"ee1db53742fe975639492ff9008734974b4a4e60fecbd41d9c24c9a268e5e02c"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.763773 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lzl7v" event={"ID":"45aa9c0b-9db2-415e-9cc9-f552f9127f34","Type":"ContainerStarted","Data":"90e1b6e5a14152d9af22d40c7aa46e437d212a26befc55ac164248bad8d99665"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.766273 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.771024 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lhgr5" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.771301 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d33f-account-create-update-jrz6t"] Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.771332 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lhgr5" event={"ID":"7a37f0cf-1920-436e-b40d-ba267fb85828","Type":"ContainerDied","Data":"bcf83d7f723c6d182eff3aa156236f645b8da9f2df59d7bdd35242e09b92515b"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.771354 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf83d7f723c6d182eff3aa156236f645b8da9f2df59d7bdd35242e09b92515b" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.775590 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d6c-account-create-update-2vb5h" event={"ID":"5a65f219-5a7b-4c05-bf31-9faa4c7490a2","Type":"ContainerStarted","Data":"74de370e1fee2f1cc11bca45c17d6f73f024130dde458a558b2c3b7eff34fa3d"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.794235 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fhhx5" event={"ID":"eaaf1648-247f-4fe8-aa81-3860b683d405","Type":"ContainerDied","Data":"d0980a9c2a6881bdec08e2b62165dc19bb24a55c657887df90ab5118f6573e8e"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.794498 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0980a9c2a6881bdec08e2b62165dc19bb24a55c657887df90ab5118f6573e8e" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.795032 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fhhx5" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.813991 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzfnl" event={"ID":"07fc7673-9046-4fee-a6f2-060f5566f405","Type":"ContainerStarted","Data":"1fdde86359780afeb3f8c99fa5cec1d954117e593f40316d1ba30d9ce6e8ae32"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.814039 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzfnl" event={"ID":"07fc7673-9046-4fee-a6f2-060f5566f405","Type":"ContainerStarted","Data":"a861551a0ca0d926860a836cdfef046ea61b6ff99d1f9871958f59557df653d1"} Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.848326 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.738002976 podStartE2EDuration="1m26.84830258s" podCreationTimestamp="2026-01-27 14:29:38 +0000 UTC" firstStartedPulling="2026-01-27 14:29:41.032956479 +0000 UTC m=+1467.617147483" lastFinishedPulling="2026-01-27 14:30:28.143256083 +0000 UTC m=+1514.727447087" observedRunningTime="2026-01-27 14:31:04.725431183 +0000 UTC m=+1551.309622197" watchObservedRunningTime="2026-01-27 14:31:04.84830258 +0000 UTC m=+1551.432493584" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.862445 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-operator-scripts\") pod \"mysqld-exporter-d33f-account-create-update-jrz6t\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.863192 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw7sz\" (UniqueName: 
\"kubernetes.io/projected/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-kube-api-access-fw7sz\") pod \"mysqld-exporter-d33f-account-create-update-jrz6t\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.879098 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371949.9757 podStartE2EDuration="1m26.879076235s" podCreationTimestamp="2026-01-27 14:29:38 +0000 UTC" firstStartedPulling="2026-01-27 14:29:41.297969095 +0000 UTC m=+1467.882160099" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:04.770599619 +0000 UTC m=+1551.354790643" watchObservedRunningTime="2026-01-27 14:31:04.879076235 +0000 UTC m=+1551.463267240" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.912776 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-lzl7v" podStartSLOduration=2.91275824 podStartE2EDuration="2.91275824s" podCreationTimestamp="2026-01-27 14:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:04.794782006 +0000 UTC m=+1551.378973010" watchObservedRunningTime="2026-01-27 14:31:04.91275824 +0000 UTC m=+1551.496949244" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.947357 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-4dc2-account-create-update-bqd4c" podStartSLOduration=2.9473384879999998 podStartE2EDuration="2.947338488s" podCreationTimestamp="2026-01-27 14:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:04.812947839 +0000 UTC m=+1551.397138843" watchObservedRunningTime="2026-01-27 14:31:04.947338488 +0000 UTC 
m=+1551.531529492" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.965812 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw7sz\" (UniqueName: \"kubernetes.io/projected/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-kube-api-access-fw7sz\") pod \"mysqld-exporter-d33f-account-create-update-jrz6t\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.965987 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-operator-scripts\") pod \"mysqld-exporter-d33f-account-create-update-jrz6t\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.977791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-operator-scripts\") pod \"mysqld-exporter-d33f-account-create-update-jrz6t\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:04 crc kubenswrapper[4729]: I0127 14:31:04.995143 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw7sz\" (UniqueName: \"kubernetes.io/projected/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-kube-api-access-fw7sz\") pod \"mysqld-exporter-d33f-account-create-update-jrz6t\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.091105 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gk2cz-config-vc7pl"] Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.151550 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.458602 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-gt6kg"] Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.799680 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-d33f-account-create-update-jrz6t"] Jan 27 14:31:05 crc kubenswrapper[4729]: W0127 14:31:05.811668 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b7c01e0_0dfb_41c8_89f7_0eb5c38eeda4.slice/crio-4f795173ec1dc78a14c480022eb672358d434f9ea235ee40a037a4a4ae8bbe91 WatchSource:0}: Error finding container 4f795173ec1dc78a14c480022eb672358d434f9ea235ee40a037a4a4ae8bbe91: Status 404 returned error can't find the container with id 4f795173ec1dc78a14c480022eb672358d434f9ea235ee40a037a4a4ae8bbe91 Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.829259 4729 generic.go:334] "Generic (PLEG): container finished" podID="07fc7673-9046-4fee-a6f2-060f5566f405" containerID="1fdde86359780afeb3f8c99fa5cec1d954117e593f40316d1ba30d9ce6e8ae32" exitCode=0 Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.829348 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzfnl" event={"ID":"07fc7673-9046-4fee-a6f2-060f5566f405","Type":"ContainerDied","Data":"1fdde86359780afeb3f8c99fa5cec1d954117e593f40316d1ba30d9ce6e8ae32"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.832349 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d6c-account-create-update-2vb5h" event={"ID":"5a65f219-5a7b-4c05-bf31-9faa4c7490a2","Type":"ContainerStarted","Data":"eb472908af5e413325381e0b0e7e45313ff7873ed1bc536bcbecde504b6f4850"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.836388 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gk2cz-config-vc7pl" event={"ID":"ad36c03d-b2f2-48bf-92b0-f45dd39251b1","Type":"ContainerStarted","Data":"423747c95184e848b8c7fd0bc26a76f0d6f5a246f87678fdde9714c2c7f4fcc5"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.836442 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gk2cz-config-vc7pl" event={"ID":"ad36c03d-b2f2-48bf-92b0-f45dd39251b1","Type":"ContainerStarted","Data":"03f05199eed62c7393130154ae071caea8f49c0096321d876c8921bfa6f0551a"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.841093 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" event={"ID":"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc","Type":"ContainerStarted","Data":"c9b88a9af52a3461fe638d093e6de199de47a00806157632d140e8cab67ea731"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.846259 4729 generic.go:334] "Generic (PLEG): container finished" podID="b6276002-6f52-4082-b413-767e4b80717a" containerID="aa65169390446ab20256a54d4a230ed0d35b4a5631f18985f4de82504c0a4b1e" exitCode=0 Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.846346 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc2-account-create-update-bqd4c" event={"ID":"b6276002-6f52-4082-b413-767e4b80717a","Type":"ContainerDied","Data":"aa65169390446ab20256a54d4a230ed0d35b4a5631f18985f4de82504c0a4b1e"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.848982 4729 generic.go:334] "Generic (PLEG): container finished" podID="45aa9c0b-9db2-415e-9cc9-f552f9127f34" containerID="ee1db53742fe975639492ff9008734974b4a4e60fecbd41d9c24c9a268e5e02c" exitCode=0 Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.849068 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lzl7v" 
event={"ID":"45aa9c0b-9db2-415e-9cc9-f552f9127f34","Type":"ContainerDied","Data":"ee1db53742fe975639492ff9008734974b4a4e60fecbd41d9c24c9a268e5e02c"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.851470 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" event={"ID":"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4","Type":"ContainerStarted","Data":"4f795173ec1dc78a14c480022eb672358d434f9ea235ee40a037a4a4ae8bbe91"} Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.858111 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8d6c-account-create-update-2vb5h" podStartSLOduration=3.858095939 podStartE2EDuration="3.858095939s" podCreationTimestamp="2026-01-27 14:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:05.857820342 +0000 UTC m=+1552.442011346" watchObservedRunningTime="2026-01-27 14:31:05.858095939 +0000 UTC m=+1552.442286943" Jan 27 14:31:05 crc kubenswrapper[4729]: I0127 14:31:05.885086 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gk2cz-config-vc7pl" podStartSLOduration=2.885067081 podStartE2EDuration="2.885067081s" podCreationTimestamp="2026-01-27 14:31:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:05.883606131 +0000 UTC m=+1552.467797155" watchObservedRunningTime="2026-01-27 14:31:05.885067081 +0000 UTC m=+1552.469258085" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.311908 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fhhx5"] Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.327918 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fhhx5"] Jan 27 14:31:06 crc 
kubenswrapper[4729]: I0127 14:31:06.392055 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.522678 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc7673-9046-4fee-a6f2-060f5566f405-operator-scripts\") pod \"07fc7673-9046-4fee-a6f2-060f5566f405\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.522721 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksrpn\" (UniqueName: \"kubernetes.io/projected/07fc7673-9046-4fee-a6f2-060f5566f405-kube-api-access-ksrpn\") pod \"07fc7673-9046-4fee-a6f2-060f5566f405\" (UID: \"07fc7673-9046-4fee-a6f2-060f5566f405\") " Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.523281 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07fc7673-9046-4fee-a6f2-060f5566f405-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07fc7673-9046-4fee-a6f2-060f5566f405" (UID: "07fc7673-9046-4fee-a6f2-060f5566f405"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.528726 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07fc7673-9046-4fee-a6f2-060f5566f405-kube-api-access-ksrpn" (OuterVolumeSpecName: "kube-api-access-ksrpn") pod "07fc7673-9046-4fee-a6f2-060f5566f405" (UID: "07fc7673-9046-4fee-a6f2-060f5566f405"). InnerVolumeSpecName "kube-api-access-ksrpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.625357 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07fc7673-9046-4fee-a6f2-060f5566f405-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.625412 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksrpn\" (UniqueName: \"kubernetes.io/projected/07fc7673-9046-4fee-a6f2-060f5566f405-kube-api-access-ksrpn\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.878546 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzfnl" event={"ID":"07fc7673-9046-4fee-a6f2-060f5566f405","Type":"ContainerDied","Data":"a861551a0ca0d926860a836cdfef046ea61b6ff99d1f9871958f59557df653d1"} Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.878591 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a861551a0ca0d926860a836cdfef046ea61b6ff99d1f9871958f59557df653d1" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.878643 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nzfnl" Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.880824 4729 generic.go:334] "Generic (PLEG): container finished" podID="5a65f219-5a7b-4c05-bf31-9faa4c7490a2" containerID="eb472908af5e413325381e0b0e7e45313ff7873ed1bc536bcbecde504b6f4850" exitCode=0 Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.880916 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d6c-account-create-update-2vb5h" event={"ID":"5a65f219-5a7b-4c05-bf31-9faa4c7490a2","Type":"ContainerDied","Data":"eb472908af5e413325381e0b0e7e45313ff7873ed1bc536bcbecde504b6f4850"} Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.883287 4729 generic.go:334] "Generic (PLEG): container finished" podID="ad36c03d-b2f2-48bf-92b0-f45dd39251b1" containerID="423747c95184e848b8c7fd0bc26a76f0d6f5a246f87678fdde9714c2c7f4fcc5" exitCode=0 Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.883344 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gk2cz-config-vc7pl" event={"ID":"ad36c03d-b2f2-48bf-92b0-f45dd39251b1","Type":"ContainerDied","Data":"423747c95184e848b8c7fd0bc26a76f0d6f5a246f87678fdde9714c2c7f4fcc5"} Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.885676 4729 generic.go:334] "Generic (PLEG): container finished" podID="3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" containerID="477a244569fc9505ad6844c4dfd812bdb6c5eb8d3a7ff9e9cfe829e1a48cef68" exitCode=0 Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.885752 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" event={"ID":"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc","Type":"ContainerDied","Data":"477a244569fc9505ad6844c4dfd812bdb6c5eb8d3a7ff9e9cfe829e1a48cef68"} Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.888125 4729 generic.go:334] "Generic (PLEG): container finished" podID="d148c837-c681-4446-9e81-195c19108d09" 
containerID="920267262e426a5814f7dcc9824fbb8d062ed5b8d587b97e2c1394ef8c992b51" exitCode=0 Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.888178 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d148c837-c681-4446-9e81-195c19108d09","Type":"ContainerDied","Data":"920267262e426a5814f7dcc9824fbb8d062ed5b8d587b97e2c1394ef8c992b51"} Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.893953 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" containerID="db76f0122c0e55869d6636f8930ab13d354c173c565d70f3e568b967a12729e2" exitCode=0 Jan 27 14:31:06 crc kubenswrapper[4729]: I0127 14:31:06.894216 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" event={"ID":"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4","Type":"ContainerDied","Data":"db76f0122c0e55869d6636f8930ab13d354c173c565d70f3e568b967a12729e2"} Jan 27 14:31:07 crc kubenswrapper[4729]: I0127 14:31:07.362531 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.098765 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaaf1648-247f-4fe8-aa81-3860b683d405" path="/var/lib/kubelet/pods/eaaf1648-247f-4fe8-aa81-3860b683d405/volumes" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.213348 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gk2cz" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.350004 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.361597 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.496761 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45aa9c0b-9db2-415e-9cc9-f552f9127f34-operator-scripts\") pod \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.498203 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45aa9c0b-9db2-415e-9cc9-f552f9127f34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45aa9c0b-9db2-415e-9cc9-f552f9127f34" (UID: "45aa9c0b-9db2-415e-9cc9-f552f9127f34"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.498259 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6276002-6f52-4082-b413-767e4b80717a-operator-scripts\") pod \"b6276002-6f52-4082-b413-767e4b80717a\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.498333 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkwrv\" (UniqueName: \"kubernetes.io/projected/b6276002-6f52-4082-b413-767e4b80717a-kube-api-access-hkwrv\") pod \"b6276002-6f52-4082-b413-767e4b80717a\" (UID: \"b6276002-6f52-4082-b413-767e4b80717a\") " Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.498526 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjq2w\" (UniqueName: \"kubernetes.io/projected/45aa9c0b-9db2-415e-9cc9-f552f9127f34-kube-api-access-kjq2w\") pod \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\" (UID: \"45aa9c0b-9db2-415e-9cc9-f552f9127f34\") " Jan 27 14:31:08 crc 
kubenswrapper[4729]: I0127 14:31:08.498800 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6276002-6f52-4082-b413-767e4b80717a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6276002-6f52-4082-b413-767e4b80717a" (UID: "b6276002-6f52-4082-b413-767e4b80717a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.500441 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45aa9c0b-9db2-415e-9cc9-f552f9127f34-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.500466 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6276002-6f52-4082-b413-767e4b80717a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.505677 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45aa9c0b-9db2-415e-9cc9-f552f9127f34-kube-api-access-kjq2w" (OuterVolumeSpecName: "kube-api-access-kjq2w") pod "45aa9c0b-9db2-415e-9cc9-f552f9127f34" (UID: "45aa9c0b-9db2-415e-9cc9-f552f9127f34"). InnerVolumeSpecName "kube-api-access-kjq2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.519156 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6276002-6f52-4082-b413-767e4b80717a-kube-api-access-hkwrv" (OuterVolumeSpecName: "kube-api-access-hkwrv") pod "b6276002-6f52-4082-b413-767e4b80717a" (UID: "b6276002-6f52-4082-b413-767e4b80717a"). InnerVolumeSpecName "kube-api-access-hkwrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.602354 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkwrv\" (UniqueName: \"kubernetes.io/projected/b6276002-6f52-4082-b413-767e4b80717a-kube-api-access-hkwrv\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.602628 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjq2w\" (UniqueName: \"kubernetes.io/projected/45aa9c0b-9db2-415e-9cc9-f552f9127f34-kube-api-access-kjq2w\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.624697 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mvsqd"] Jan 27 14:31:08 crc kubenswrapper[4729]: E0127 14:31:08.625241 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45aa9c0b-9db2-415e-9cc9-f552f9127f34" containerName="mariadb-database-create" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.625263 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="45aa9c0b-9db2-415e-9cc9-f552f9127f34" containerName="mariadb-database-create" Jan 27 14:31:08 crc kubenswrapper[4729]: E0127 14:31:08.625287 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6276002-6f52-4082-b413-767e4b80717a" containerName="mariadb-account-create-update" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.625295 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6276002-6f52-4082-b413-767e4b80717a" containerName="mariadb-account-create-update" Jan 27 14:31:08 crc kubenswrapper[4729]: E0127 14:31:08.625323 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07fc7673-9046-4fee-a6f2-060f5566f405" containerName="mariadb-database-create" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.625333 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="07fc7673-9046-4fee-a6f2-060f5566f405" 
containerName="mariadb-database-create" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.625566 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="07fc7673-9046-4fee-a6f2-060f5566f405" containerName="mariadb-database-create" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.625586 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="45aa9c0b-9db2-415e-9cc9-f552f9127f34" containerName="mariadb-database-create" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.625608 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6276002-6f52-4082-b413-767e4b80717a" containerName="mariadb-account-create-update" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.626532 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.643327 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mvsqd"] Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.643983 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.643982 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fp622" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.707225 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-db-sync-config-data\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.707593 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-config-data\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.707812 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dbr\" (UniqueName: \"kubernetes.io/projected/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-kube-api-access-x6dbr\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.707944 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-combined-ca-bundle\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.796366 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.809614 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-db-sync-config-data\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.809737 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-config-data\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.809893 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dbr\" (UniqueName: \"kubernetes.io/projected/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-kube-api-access-x6dbr\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.818046 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-combined-ca-bundle\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.824642 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-combined-ca-bundle\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc 
kubenswrapper[4729]: I0127 14:31:08.825299 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-db-sync-config-data\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.829816 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-config-data\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.837020 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dbr\" (UniqueName: \"kubernetes.io/projected/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-kube-api-access-x6dbr\") pod \"glance-db-sync-mvsqd\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.917468 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.920255 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv8r7\" (UniqueName: \"kubernetes.io/projected/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-kube-api-access-qv8r7\") pod \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.920319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-operator-scripts\") pod \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\" (UID: \"5a65f219-5a7b-4c05-bf31-9faa4c7490a2\") " Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.921584 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a65f219-5a7b-4c05-bf31-9faa4c7490a2" (UID: "5a65f219-5a7b-4c05-bf31-9faa4c7490a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.925550 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-kube-api-access-qv8r7" (OuterVolumeSpecName: "kube-api-access-qv8r7") pod "5a65f219-5a7b-4c05-bf31-9faa4c7490a2" (UID: "5a65f219-5a7b-4c05-bf31-9faa4c7490a2"). InnerVolumeSpecName "kube-api-access-qv8r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.938092 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4dc2-account-create-update-bqd4c" event={"ID":"b6276002-6f52-4082-b413-767e4b80717a","Type":"ContainerDied","Data":"03082337a8d397d7a5ea7682982b93b1568e10fb568fa4df32a67bb0e833138d"} Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.938137 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03082337a8d397d7a5ea7682982b93b1568e10fb568fa4df32a67bb0e833138d" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.938196 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4dc2-account-create-update-bqd4c" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.959378 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mvsqd" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.994594 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lzl7v" event={"ID":"45aa9c0b-9db2-415e-9cc9-f552f9127f34","Type":"ContainerDied","Data":"90e1b6e5a14152d9af22d40c7aa46e437d212a26befc55ac164248bad8d99665"} Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.994652 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e1b6e5a14152d9af22d40c7aa46e437d212a26befc55ac164248bad8d99665" Jan 27 14:31:08 crc kubenswrapper[4729]: I0127 14:31:08.994727 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lzl7v" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.000708 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8d6c-account-create-update-2vb5h" event={"ID":"5a65f219-5a7b-4c05-bf31-9faa4c7490a2","Type":"ContainerDied","Data":"74de370e1fee2f1cc11bca45c17d6f73f024130dde458a558b2c3b7eff34fa3d"} Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.000756 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74de370e1fee2f1cc11bca45c17d6f73f024130dde458a558b2c3b7eff34fa3d" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.000819 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8d6c-account-create-update-2vb5h" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.024435 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gk2cz-config-vc7pl" event={"ID":"ad36c03d-b2f2-48bf-92b0-f45dd39251b1","Type":"ContainerDied","Data":"03f05199eed62c7393130154ae071caea8f49c0096321d876c8921bfa6f0551a"} Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.024487 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f05199eed62c7393130154ae071caea8f49c0096321d876c8921bfa6f0551a" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.024561 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gk2cz-config-vc7pl" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.048653 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-scripts" (OuterVolumeSpecName: "scripts") pod "ad36c03d-b2f2-48bf-92b0-f45dd39251b1" (UID: "ad36c03d-b2f2-48bf-92b0-f45dd39251b1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.049490 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-scripts\") pod \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.049632 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh42q\" (UniqueName: \"kubernetes.io/projected/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-kube-api-access-lh42q\") pod \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.049706 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run-ovn\") pod \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.049851 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-log-ovn\") pod \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.049955 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-additional-scripts\") pod \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.049997 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run\") pod \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\" (UID: \"ad36c03d-b2f2-48bf-92b0-f45dd39251b1\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.051029 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ad36c03d-b2f2-48bf-92b0-f45dd39251b1" (UID: "ad36c03d-b2f2-48bf-92b0-f45dd39251b1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.052083 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ad36c03d-b2f2-48bf-92b0-f45dd39251b1" (UID: "ad36c03d-b2f2-48bf-92b0-f45dd39251b1"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.053486 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ad36c03d-b2f2-48bf-92b0-f45dd39251b1" (UID: "ad36c03d-b2f2-48bf-92b0-f45dd39251b1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.054323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run" (OuterVolumeSpecName: "var-run") pod "ad36c03d-b2f2-48bf-92b0-f45dd39251b1" (UID: "ad36c03d-b2f2-48bf-92b0-f45dd39251b1"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.062161 4729 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.062829 4729 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.063232 4729 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.063472 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.064026 4729 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.064295 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv8r7\" (UniqueName: \"kubernetes.io/projected/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-kube-api-access-qv8r7\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.064313 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a65f219-5a7b-4c05-bf31-9faa4c7490a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.067180 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-kube-api-access-lh42q" (OuterVolumeSpecName: "kube-api-access-lh42q") pod "ad36c03d-b2f2-48bf-92b0-f45dd39251b1" (UID: "ad36c03d-b2f2-48bf-92b0-f45dd39251b1"). InnerVolumeSpecName "kube-api-access-lh42q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.111334 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gk2cz-config-vc7pl"] Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.151799 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gk2cz-config-vc7pl"] Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.181318 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh42q\" (UniqueName: \"kubernetes.io/projected/ad36c03d-b2f2-48bf-92b0-f45dd39251b1-kube-api-access-lh42q\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.285815 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.386148 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-operator-scripts\") pod \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.386332 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw7sz\" (UniqueName: \"kubernetes.io/projected/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-kube-api-access-fw7sz\") pod \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\" (UID: \"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.387243 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" (UID: "3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.387611 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.391058 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.392208 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-kube-api-access-fw7sz" (OuterVolumeSpecName: "kube-api-access-fw7sz") pod "3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" (UID: "3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4"). InnerVolumeSpecName "kube-api-access-fw7sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.488912 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtzbc\" (UniqueName: \"kubernetes.io/projected/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-kube-api-access-xtzbc\") pod \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.489129 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-operator-scripts\") pod \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\" (UID: \"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc\") " Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.489762 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw7sz\" (UniqueName: \"kubernetes.io/projected/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4-kube-api-access-fw7sz\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.490178 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" (UID: "3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.497075 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-kube-api-access-xtzbc" (OuterVolumeSpecName: "kube-api-access-xtzbc") pod "3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" (UID: "3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc"). InnerVolumeSpecName "kube-api-access-xtzbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.592377 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtzbc\" (UniqueName: \"kubernetes.io/projected/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-kube-api-access-xtzbc\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.592413 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:09 crc kubenswrapper[4729]: W0127 14:31:09.954118 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f2d8fdf_9710_4e95_a733_8ce7f61951eb.slice/crio-d490cbe338b0e26671d69aba007aa7c557f8b602860faeaf1274a5e6daff13cf WatchSource:0}: Error finding container d490cbe338b0e26671d69aba007aa7c557f8b602860faeaf1274a5e6daff13cf: Status 404 returned error can't find the container with id d490cbe338b0e26671d69aba007aa7c557f8b602860faeaf1274a5e6daff13cf Jan 27 14:31:09 crc kubenswrapper[4729]: I0127 14:31:09.954430 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mvsqd"] Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.093359 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad36c03d-b2f2-48bf-92b0-f45dd39251b1" 
path="/var/lib/kubelet/pods/ad36c03d-b2f2-48bf-92b0-f45dd39251b1/volumes" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.094565 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerStarted","Data":"a8389f09321884e19c4bd00ed6f9c014034ca08a9798247050f144c2c1972748"} Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.104245 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" event={"ID":"3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc","Type":"ContainerDied","Data":"c9b88a9af52a3461fe638d093e6de199de47a00806157632d140e8cab67ea731"} Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.104332 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b88a9af52a3461fe638d093e6de199de47a00806157632d140e8cab67ea731" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.104292 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-gt6kg" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.107644 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d148c837-c681-4446-9e81-195c19108d09","Type":"ContainerStarted","Data":"dccbf84a424976db52668c4deacfb2af44ee3eb6ffeb958a89bd909f12d954ed"} Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.108538 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.113748 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" event={"ID":"3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4","Type":"ContainerDied","Data":"4f795173ec1dc78a14c480022eb672358d434f9ea235ee40a037a4a4ae8bbe91"} Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.113824 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f795173ec1dc78a14c480022eb672358d434f9ea235ee40a037a4a4ae8bbe91" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.113782 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-d33f-account-create-update-jrz6t" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.120993 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mvsqd" event={"ID":"1f2d8fdf-9710-4e95-a733-8ce7f61951eb","Type":"ContainerStarted","Data":"d490cbe338b0e26671d69aba007aa7c557f8b602860faeaf1274a5e6daff13cf"} Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.138649 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=34.917338542 podStartE2EDuration="1m25.13863084s" podCreationTimestamp="2026-01-27 14:29:45 +0000 UTC" firstStartedPulling="2026-01-27 14:30:18.579044932 +0000 UTC m=+1505.163235936" lastFinishedPulling="2026-01-27 14:31:08.80033723 +0000 UTC m=+1555.384528234" observedRunningTime="2026-01-27 14:31:10.131222549 +0000 UTC m=+1556.715413573" watchObservedRunningTime="2026-01-27 14:31:10.13863084 +0000 UTC m=+1556.722821844" Jan 27 14:31:10 crc kubenswrapper[4729]: I0127 14:31:10.180431 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371944.674368 podStartE2EDuration="1m32.180407854s" podCreationTimestamp="2026-01-27 14:29:38 +0000 UTC" firstStartedPulling="2026-01-27 14:29:41.028233871 +0000 UTC m=+1467.612424875" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:10.172650303 +0000 UTC m=+1556.756841337" watchObservedRunningTime="2026-01-27 14:31:10.180407854 +0000 UTC m=+1556.764598868" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.137145 4729 generic.go:334] "Generic (PLEG): container finished" podID="fb94bfab-bf68-4e03-9a32-b4de4d765b1f" containerID="21cbba3b443fbaa5293304e2724938a0107380d81f34fa5d335fc8148ad266f6" exitCode=0 Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.137795 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-ring-rebalance-z4znt" event={"ID":"fb94bfab-bf68-4e03-9a32-b4de4d765b1f","Type":"ContainerDied","Data":"21cbba3b443fbaa5293304e2724938a0107380d81f34fa5d335fc8148ad266f6"} Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.302929 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-74m95"] Jan 27 14:31:11 crc kubenswrapper[4729]: E0127 14:31:11.303430 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a65f219-5a7b-4c05-bf31-9faa4c7490a2" containerName="mariadb-account-create-update" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303456 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a65f219-5a7b-4c05-bf31-9faa4c7490a2" containerName="mariadb-account-create-update" Jan 27 14:31:11 crc kubenswrapper[4729]: E0127 14:31:11.303481 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" containerName="mariadb-database-create" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303490 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" containerName="mariadb-database-create" Jan 27 14:31:11 crc kubenswrapper[4729]: E0127 14:31:11.303511 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" containerName="mariadb-account-create-update" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303519 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" containerName="mariadb-account-create-update" Jan 27 14:31:11 crc kubenswrapper[4729]: E0127 14:31:11.303566 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad36c03d-b2f2-48bf-92b0-f45dd39251b1" containerName="ovn-config" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303575 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad36c03d-b2f2-48bf-92b0-f45dd39251b1" 
containerName="ovn-config" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303797 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" containerName="mariadb-database-create" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303822 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a65f219-5a7b-4c05-bf31-9faa4c7490a2" containerName="mariadb-account-create-update" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303849 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" containerName="mariadb-account-create-update" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.303866 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad36c03d-b2f2-48bf-92b0-f45dd39251b1" containerName="ovn-config" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.305103 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.308506 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.322396 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-74m95"] Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.340515 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz5gf\" (UniqueName: \"kubernetes.io/projected/caaaad51-76f2-49f0-8dc4-6ad513a50327-kube-api-access-fz5gf\") pod \"root-account-create-update-74m95\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.340594 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaaad51-76f2-49f0-8dc4-6ad513a50327-operator-scripts\") pod \"root-account-create-update-74m95\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.442610 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz5gf\" (UniqueName: \"kubernetes.io/projected/caaaad51-76f2-49f0-8dc4-6ad513a50327-kube-api-access-fz5gf\") pod \"root-account-create-update-74m95\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.442743 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaaad51-76f2-49f0-8dc4-6ad513a50327-operator-scripts\") pod \"root-account-create-update-74m95\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.443665 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaaad51-76f2-49f0-8dc4-6ad513a50327-operator-scripts\") pod \"root-account-create-update-74m95\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.460996 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz5gf\" (UniqueName: \"kubernetes.io/projected/caaaad51-76f2-49f0-8dc4-6ad513a50327-kube-api-access-fz5gf\") pod \"root-account-create-update-74m95\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " pod="openstack/root-account-create-update-74m95" Jan 27 14:31:11 crc kubenswrapper[4729]: I0127 14:31:11.629495 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-74m95" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.120369 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.156167 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-74m95"] Jan 27 14:31:12 crc kubenswrapper[4729]: W0127 14:31:12.161470 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcaaaad51_76f2_49f0_8dc4_6ad513a50327.slice/crio-63018b8a99d340139fa628b1427a4476f39f523d81bd5da9cdabff382f0ce700 WatchSource:0}: Error finding container 63018b8a99d340139fa628b1427a4476f39f523d81bd5da9cdabff382f0ce700: Status 404 returned error can't find the container with id 63018b8a99d340139fa628b1427a4476f39f523d81bd5da9cdabff382f0ce700 Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.676939 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.782545 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jdp6\" (UniqueName: \"kubernetes.io/projected/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-kube-api-access-4jdp6\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.782701 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-ring-data-devices\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.782775 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-dispersionconf\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.782913 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-swiftconf\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.783005 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-combined-ca-bundle\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.783086 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-etc-swift\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.783279 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-scripts\") pod \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\" (UID: \"fb94bfab-bf68-4e03-9a32-b4de4d765b1f\") " Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.783615 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.784472 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.785196 4729 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.785210 4729 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.788763 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-kube-api-access-4jdp6" (OuterVolumeSpecName: "kube-api-access-4jdp6") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "kube-api-access-4jdp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.809649 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.810173 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.814990 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-scripts" (OuterVolumeSpecName: "scripts") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.830733 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fb94bfab-bf68-4e03-9a32-b4de4d765b1f" (UID: "fb94bfab-bf68-4e03-9a32-b4de4d765b1f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.888934 4729 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.888978 4729 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.888991 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 14:31:12.889005 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:12 crc kubenswrapper[4729]: I0127 
14:31:12.889018 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jdp6\" (UniqueName: \"kubernetes.io/projected/fb94bfab-bf68-4e03-9a32-b4de4d765b1f-kube-api-access-4jdp6\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:13 crc kubenswrapper[4729]: I0127 14:31:13.160617 4729 generic.go:334] "Generic (PLEG): container finished" podID="caaaad51-76f2-49f0-8dc4-6ad513a50327" containerID="c4e5b0a48fe01056a742d1756873e73978d307cff5ae9a0fa40fed45e085a408" exitCode=0 Jan 27 14:31:13 crc kubenswrapper[4729]: I0127 14:31:13.161557 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-74m95" event={"ID":"caaaad51-76f2-49f0-8dc4-6ad513a50327","Type":"ContainerDied","Data":"c4e5b0a48fe01056a742d1756873e73978d307cff5ae9a0fa40fed45e085a408"} Jan 27 14:31:13 crc kubenswrapper[4729]: I0127 14:31:13.161618 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-74m95" event={"ID":"caaaad51-76f2-49f0-8dc4-6ad513a50327","Type":"ContainerStarted","Data":"63018b8a99d340139fa628b1427a4476f39f523d81bd5da9cdabff382f0ce700"} Jan 27 14:31:13 crc kubenswrapper[4729]: I0127 14:31:13.163800 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-z4znt" event={"ID":"fb94bfab-bf68-4e03-9a32-b4de4d765b1f","Type":"ContainerDied","Data":"a621ff7aede60b8b1ba1049c1c8b3efeae3b14b34f565afaea6cc0f645248b1c"} Jan 27 14:31:13 crc kubenswrapper[4729]: I0127 14:31:13.163847 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a621ff7aede60b8b1ba1049c1c8b3efeae3b14b34f565afaea6cc0f645248b1c" Jan 27 14:31:13 crc kubenswrapper[4729]: I0127 14:31:13.164022 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-z4znt" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.176958 4729 generic.go:334] "Generic (PLEG): container finished" podID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerID="1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd" exitCode=0 Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.177065 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"13cfdd20-ad90-472d-8962-6bec29b3fa74","Type":"ContainerDied","Data":"1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd"} Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.668059 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-74m95" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.725488 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pg87j"] Jan 27 14:31:14 crc kubenswrapper[4729]: E0127 14:31:14.726431 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb94bfab-bf68-4e03-9a32-b4de4d765b1f" containerName="swift-ring-rebalance" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.726460 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb94bfab-bf68-4e03-9a32-b4de4d765b1f" containerName="swift-ring-rebalance" Jan 27 14:31:14 crc kubenswrapper[4729]: E0127 14:31:14.726506 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caaaad51-76f2-49f0-8dc4-6ad513a50327" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.726513 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="caaaad51-76f2-49f0-8dc4-6ad513a50327" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.727235 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fb94bfab-bf68-4e03-9a32-b4de4d765b1f" containerName="swift-ring-rebalance" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.727267 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="caaaad51-76f2-49f0-8dc4-6ad513a50327" containerName="mariadb-account-create-update" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.728272 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.733373 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaaad51-76f2-49f0-8dc4-6ad513a50327-operator-scripts\") pod \"caaaad51-76f2-49f0-8dc4-6ad513a50327\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.733507 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz5gf\" (UniqueName: \"kubernetes.io/projected/caaaad51-76f2-49f0-8dc4-6ad513a50327-kube-api-access-fz5gf\") pod \"caaaad51-76f2-49f0-8dc4-6ad513a50327\" (UID: \"caaaad51-76f2-49f0-8dc4-6ad513a50327\") " Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.735789 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caaaad51-76f2-49f0-8dc4-6ad513a50327-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "caaaad51-76f2-49f0-8dc4-6ad513a50327" (UID: "caaaad51-76f2-49f0-8dc4-6ad513a50327"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.746475 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caaaad51-76f2-49f0-8dc4-6ad513a50327-kube-api-access-fz5gf" (OuterVolumeSpecName: "kube-api-access-fz5gf") pod "caaaad51-76f2-49f0-8dc4-6ad513a50327" (UID: "caaaad51-76f2-49f0-8dc4-6ad513a50327"). InnerVolumeSpecName "kube-api-access-fz5gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.759692 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pg87j"] Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.838394 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfr6n\" (UniqueName: \"kubernetes.io/projected/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-kube-api-access-xfr6n\") pod \"mysqld-exporter-openstack-cell1-db-create-pg87j\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.838512 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pg87j\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.838583 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/caaaad51-76f2-49f0-8dc4-6ad513a50327-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.838602 4729 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-fz5gf\" (UniqueName: \"kubernetes.io/projected/caaaad51-76f2-49f0-8dc4-6ad513a50327-kube-api-access-fz5gf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.927365 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-637a-account-create-update-hxzgl"] Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.928658 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.936334 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.940257 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22gv\" (UniqueName: \"kubernetes.io/projected/eb6072b2-cd99-4198-ac75-97a21048aaa9-kube-api-access-k22gv\") pod \"mysqld-exporter-637a-account-create-update-hxzgl\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.940339 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pg87j\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.940390 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6072b2-cd99-4198-ac75-97a21048aaa9-operator-scripts\") pod \"mysqld-exporter-637a-account-create-update-hxzgl\" (UID: 
\"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.940580 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfr6n\" (UniqueName: \"kubernetes.io/projected/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-kube-api-access-xfr6n\") pod \"mysqld-exporter-openstack-cell1-db-create-pg87j\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.941376 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pg87j\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.949314 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-637a-account-create-update-hxzgl"] Jan 27 14:31:14 crc kubenswrapper[4729]: I0127 14:31:14.962146 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfr6n\" (UniqueName: \"kubernetes.io/projected/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-kube-api-access-xfr6n\") pod \"mysqld-exporter-openstack-cell1-db-create-pg87j\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.024013 4729 scope.go:117] "RemoveContainer" containerID="7839d7cbe95928236a0f448e10b3de55844205d847b17d72425d950f3ead868e" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.042476 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22gv\" (UniqueName: 
\"kubernetes.io/projected/eb6072b2-cd99-4198-ac75-97a21048aaa9-kube-api-access-k22gv\") pod \"mysqld-exporter-637a-account-create-update-hxzgl\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.042545 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6072b2-cd99-4198-ac75-97a21048aaa9-operator-scripts\") pod \"mysqld-exporter-637a-account-create-update-hxzgl\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.043236 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6072b2-cd99-4198-ac75-97a21048aaa9-operator-scripts\") pod \"mysqld-exporter-637a-account-create-update-hxzgl\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.050947 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:31:15 crc kubenswrapper[4729]: E0127 14:31:15.051416 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.056201 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.062779 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22gv\" (UniqueName: \"kubernetes.io/projected/eb6072b2-cd99-4198-ac75-97a21048aaa9-kube-api-access-k22gv\") pod \"mysqld-exporter-637a-account-create-update-hxzgl\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.190187 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-74m95" event={"ID":"caaaad51-76f2-49f0-8dc4-6ad513a50327","Type":"ContainerDied","Data":"63018b8a99d340139fa628b1427a4476f39f523d81bd5da9cdabff382f0ce700"} Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.190227 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63018b8a99d340139fa628b1427a4476f39f523d81bd5da9cdabff382f0ce700" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.190228 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-74m95" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.252567 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.779412 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pg87j"] Jan 27 14:31:15 crc kubenswrapper[4729]: W0127 14:31:15.787573 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc484e844_15ff_4e38_8b3d_95cdbbc29fdf.slice/crio-0e2a11c0ca5bcf458ba099f7609a0717f34959fe0148b539bbac1c58ce53cb35 WatchSource:0}: Error finding container 0e2a11c0ca5bcf458ba099f7609a0717f34959fe0148b539bbac1c58ce53cb35: Status 404 returned error can't find the container with id 0e2a11c0ca5bcf458ba099f7609a0717f34959fe0148b539bbac1c58ce53cb35 Jan 27 14:31:15 crc kubenswrapper[4729]: I0127 14:31:15.966989 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-637a-account-create-update-hxzgl"] Jan 27 14:31:15 crc kubenswrapper[4729]: W0127 14:31:15.975277 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb6072b2_cd99_4198_ac75_97a21048aaa9.slice/crio-d6888fe7bd41a2e106a9f3510d8056e65a47ffde55ce4909f2be240547aac299 WatchSource:0}: Error finding container d6888fe7bd41a2e106a9f3510d8056e65a47ffde55ce4909f2be240547aac299: Status 404 returned error can't find the container with id d6888fe7bd41a2e106a9f3510d8056e65a47ffde55ce4909f2be240547aac299 Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.204602 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"13cfdd20-ad90-472d-8962-6bec29b3fa74","Type":"ContainerStarted","Data":"9e1a560e097c137694bb9d2b51a4a3e63b94f1421aa80b5afa787aed36c0174b"} Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.204888 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.208502 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" event={"ID":"eb6072b2-cd99-4198-ac75-97a21048aaa9","Type":"ContainerStarted","Data":"6ff7730e5c08394546af83857c301aadb27cd591cc604c90f8e9a8577b25a5ba"} Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.208554 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" event={"ID":"eb6072b2-cd99-4198-ac75-97a21048aaa9","Type":"ContainerStarted","Data":"d6888fe7bd41a2e106a9f3510d8056e65a47ffde55ce4909f2be240547aac299"} Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.212449 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" event={"ID":"c484e844-15ff-4e38-8b3d-95cdbbc29fdf","Type":"ContainerStarted","Data":"52be79555f5a759f3b188732504cdd53df1eff0aa0dd772102fb8d92534cef46"} Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.212489 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" event={"ID":"c484e844-15ff-4e38-8b3d-95cdbbc29fdf","Type":"ContainerStarted","Data":"0e2a11c0ca5bcf458ba099f7609a0717f34959fe0148b539bbac1c58ce53cb35"} Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.241239 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371938.613558 podStartE2EDuration="1m38.241218085s" podCreationTimestamp="2026-01-27 14:29:38 +0000 UTC" firstStartedPulling="2026-01-27 14:29:41.784710052 +0000 UTC m=+1468.368901056" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:16.238369448 +0000 UTC m=+1562.822560442" watchObservedRunningTime="2026-01-27 14:31:16.241218085 +0000 UTC m=+1562.825409089" Jan 27 14:31:16 
crc kubenswrapper[4729]: I0127 14:31:16.258703 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" podStartSLOduration=2.25868544 podStartE2EDuration="2.25868544s" podCreationTimestamp="2026-01-27 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:16.251219658 +0000 UTC m=+1562.835410662" watchObservedRunningTime="2026-01-27 14:31:16.25868544 +0000 UTC m=+1562.842876444" Jan 27 14:31:16 crc kubenswrapper[4729]: I0127 14:31:16.279329 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" podStartSLOduration=2.27930991 podStartE2EDuration="2.27930991s" podCreationTimestamp="2026-01-27 14:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:31:16.275101846 +0000 UTC m=+1562.859292860" watchObservedRunningTime="2026-01-27 14:31:16.27930991 +0000 UTC m=+1562.863500914" Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.088518 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.091610 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.224315 4729 generic.go:334] "Generic (PLEG): container finished" podID="eb6072b2-cd99-4198-ac75-97a21048aaa9" containerID="6ff7730e5c08394546af83857c301aadb27cd591cc604c90f8e9a8577b25a5ba" exitCode=0 Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.224400 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" 
event={"ID":"eb6072b2-cd99-4198-ac75-97a21048aaa9","Type":"ContainerDied","Data":"6ff7730e5c08394546af83857c301aadb27cd591cc604c90f8e9a8577b25a5ba"} Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.228598 4729 generic.go:334] "Generic (PLEG): container finished" podID="c484e844-15ff-4e38-8b3d-95cdbbc29fdf" containerID="52be79555f5a759f3b188732504cdd53df1eff0aa0dd772102fb8d92534cef46" exitCode=0 Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.229053 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" event={"ID":"c484e844-15ff-4e38-8b3d-95cdbbc29fdf","Type":"ContainerDied","Data":"52be79555f5a759f3b188732504cdd53df1eff0aa0dd772102fb8d92534cef46"} Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.231163 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.715098 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.731643 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/399f6c9f-a3d5-4235-bce9-f3623e6be7f4-etc-swift\") pod \"swift-storage-0\" (UID: \"399f6c9f-a3d5-4235-bce9-f3623e6be7f4\") " pod="openstack/swift-storage-0" Jan 27 14:31:17 crc kubenswrapper[4729]: I0127 14:31:17.882013 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 27 14:31:19 crc kubenswrapper[4729]: I0127 14:31:19.843497 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.125:5671: connect: connection refused" Jan 27 14:31:19 crc kubenswrapper[4729]: I0127 14:31:19.902208 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:19 crc kubenswrapper[4729]: I0127 14:31:19.902594 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="thanos-sidecar" containerID="cri-o://a8389f09321884e19c4bd00ed6f9c014034ca08a9798247050f144c2c1972748" gracePeriod=600 Jan 27 14:31:19 crc kubenswrapper[4729]: I0127 14:31:19.902661 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="config-reloader" containerID="cri-o://5a91e205df2de1035e3bab27ad791de5d5a5303cbea68d0070693cea5ac8636d" gracePeriod=600 Jan 27 14:31:19 crc kubenswrapper[4729]: I0127 14:31:19.902521 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="prometheus" containerID="cri-o://a0c233716c4a01787aefd955aeb3845c44cbcc26c5938224e84faa71f80af099" gracePeriod=600 Jan 27 14:31:19 crc kubenswrapper[4729]: I0127 14:31:19.964427 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.186821 4729 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.264655 4729 generic.go:334] "Generic (PLEG): container finished" podID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerID="a8389f09321884e19c4bd00ed6f9c014034ca08a9798247050f144c2c1972748" exitCode=0 Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.264691 4729 generic.go:334] "Generic (PLEG): container finished" podID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerID="5a91e205df2de1035e3bab27ad791de5d5a5303cbea68d0070693cea5ac8636d" exitCode=0 Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.264699 4729 generic.go:334] "Generic (PLEG): container finished" podID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerID="a0c233716c4a01787aefd955aeb3845c44cbcc26c5938224e84faa71f80af099" exitCode=0 Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.264719 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerDied","Data":"a8389f09321884e19c4bd00ed6f9c014034ca08a9798247050f144c2c1972748"} Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.264743 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerDied","Data":"5a91e205df2de1035e3bab27ad791de5d5a5303cbea68d0070693cea5ac8636d"} Jan 27 14:31:20 crc kubenswrapper[4729]: I0127 14:31:20.264752 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerDied","Data":"a0c233716c4a01787aefd955aeb3845c44cbcc26c5938224e84faa71f80af099"} Jan 27 14:31:20 crc kubenswrapper[4729]: E0127 
14:31:20.832694 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:22 crc kubenswrapper[4729]: I0127 14:31:22.090490 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.135:9090/-/ready\": dial tcp 10.217.0.135:9090: connect: connection refused" Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.923023 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.925164 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.979279 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6072b2-cd99-4198-ac75-97a21048aaa9-operator-scripts\") pod \"eb6072b2-cd99-4198-ac75-97a21048aaa9\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.979367 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfr6n\" (UniqueName: \"kubernetes.io/projected/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-kube-api-access-xfr6n\") pod \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.979413 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k22gv\" (UniqueName: \"kubernetes.io/projected/eb6072b2-cd99-4198-ac75-97a21048aaa9-kube-api-access-k22gv\") pod \"eb6072b2-cd99-4198-ac75-97a21048aaa9\" (UID: \"eb6072b2-cd99-4198-ac75-97a21048aaa9\") " Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.979618 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-operator-scripts\") pod \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\" (UID: \"c484e844-15ff-4e38-8b3d-95cdbbc29fdf\") " Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.980393 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6072b2-cd99-4198-ac75-97a21048aaa9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb6072b2-cd99-4198-ac75-97a21048aaa9" (UID: "eb6072b2-cd99-4198-ac75-97a21048aaa9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.980444 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c484e844-15ff-4e38-8b3d-95cdbbc29fdf" (UID: "c484e844-15ff-4e38-8b3d-95cdbbc29fdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.986744 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6072b2-cd99-4198-ac75-97a21048aaa9-kube-api-access-k22gv" (OuterVolumeSpecName: "kube-api-access-k22gv") pod "eb6072b2-cd99-4198-ac75-97a21048aaa9" (UID: "eb6072b2-cd99-4198-ac75-97a21048aaa9"). InnerVolumeSpecName "kube-api-access-k22gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:24 crc kubenswrapper[4729]: I0127 14:31:24.986813 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-kube-api-access-xfr6n" (OuterVolumeSpecName: "kube-api-access-xfr6n") pod "c484e844-15ff-4e38-8b3d-95cdbbc29fdf" (UID: "c484e844-15ff-4e38-8b3d-95cdbbc29fdf"). InnerVolumeSpecName "kube-api-access-xfr6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.084773 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.085055 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb6072b2-cd99-4198-ac75-97a21048aaa9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.085065 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfr6n\" (UniqueName: \"kubernetes.io/projected/c484e844-15ff-4e38-8b3d-95cdbbc29fdf-kube-api-access-xfr6n\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.085076 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k22gv\" (UniqueName: \"kubernetes.io/projected/eb6072b2-cd99-4198-ac75-97a21048aaa9-kube-api-access-k22gv\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.309698 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.309701 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pg87j" event={"ID":"c484e844-15ff-4e38-8b3d-95cdbbc29fdf","Type":"ContainerDied","Data":"0e2a11c0ca5bcf458ba099f7609a0717f34959fe0148b539bbac1c58ce53cb35"} Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.309866 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e2a11c0ca5bcf458ba099f7609a0717f34959fe0148b539bbac1c58ce53cb35" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.311634 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" event={"ID":"eb6072b2-cd99-4198-ac75-97a21048aaa9","Type":"ContainerDied","Data":"d6888fe7bd41a2e106a9f3510d8056e65a47ffde55ce4909f2be240547aac299"} Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.311682 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6888fe7bd41a2e106a9f3510d8056e65a47ffde55ce4909f2be240547aac299" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.311771 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-637a-account-create-update-hxzgl" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.353590 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rp4xd"] Jan 27 14:31:25 crc kubenswrapper[4729]: E0127 14:31:25.354026 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6072b2-cd99-4198-ac75-97a21048aaa9" containerName="mariadb-account-create-update" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.354043 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6072b2-cd99-4198-ac75-97a21048aaa9" containerName="mariadb-account-create-update" Jan 27 14:31:25 crc kubenswrapper[4729]: E0127 14:31:25.354055 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c484e844-15ff-4e38-8b3d-95cdbbc29fdf" containerName="mariadb-database-create" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.354062 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c484e844-15ff-4e38-8b3d-95cdbbc29fdf" containerName="mariadb-database-create" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.355396 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c484e844-15ff-4e38-8b3d-95cdbbc29fdf" containerName="mariadb-database-create" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.355424 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6072b2-cd99-4198-ac75-97a21048aaa9" containerName="mariadb-account-create-update" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.356783 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.394197 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-utilities\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.394589 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-catalog-content\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.394684 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjtw\" (UniqueName: \"kubernetes.io/projected/ccd370a7-8144-4880-8412-c55559238c41-kube-api-access-qpjtw\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.395007 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rp4xd"] Jan 27 14:31:25 crc kubenswrapper[4729]: W0127 14:31:25.452478 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod399f6c9f_a3d5_4235_bce9_f3623e6be7f4.slice/crio-dd3b62646a00aa6377eaf1964f657ba7e211214037b77942fada5ff00b5268c6 WatchSource:0}: Error finding container dd3b62646a00aa6377eaf1964f657ba7e211214037b77942fada5ff00b5268c6: Status 404 returned error can't find the container with id 
dd3b62646a00aa6377eaf1964f657ba7e211214037b77942fada5ff00b5268c6 Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.453856 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.496538 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-utilities\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.496936 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-catalog-content\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.497060 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjtw\" (UniqueName: \"kubernetes.io/projected/ccd370a7-8144-4880-8412-c55559238c41-kube-api-access-qpjtw\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.497747 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-catalog-content\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.498006 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-utilities\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.520559 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjtw\" (UniqueName: \"kubernetes.io/projected/ccd370a7-8144-4880-8412-c55559238c41-kube-api-access-qpjtw\") pod \"redhat-operators-rp4xd\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.681515 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.702274 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.803963 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-tls-assets\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.804032 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config-out\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.804092 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62qcf\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-kube-api-access-62qcf\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: 
\"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.804115 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-thanos-prometheus-http-client-file\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.805373 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-1\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.807643 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.807923 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.808026 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.808100 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-web-config\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.808130 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-0\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.808221 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-2\") pod \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\" (UID: \"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8\") " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.809512 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.808862 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.814909 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config" (OuterVolumeSpecName: "config") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.814955 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.815004 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config-out" (OuterVolumeSpecName: "config-out") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.815043 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-kube-api-access-62qcf" (OuterVolumeSpecName: "kube-api-access-62qcf") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "kube-api-access-62qcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.815233 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.816963 4729 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.817020 4729 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.817036 4729 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.831791 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.840104 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-web-config" (OuterVolumeSpecName: "web-config") pod "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" (UID: "adf6ced5-b7ed-4e10-b746-2f5ae329f4d8"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919174 4729 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919218 4729 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config-out\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919231 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62qcf\" (UniqueName: \"kubernetes.io/projected/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-kube-api-access-62qcf\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919243 4729 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919342 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") on node \"crc\" " Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919362 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.919378 4729 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8-web-config\") on node \"crc\" DevicePath \"\"" Jan 27 
14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.946570 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:31:25 crc kubenswrapper[4729]: I0127 14:31:25.946769 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310") on node "crc" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.023501 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") on node \"crc\" DevicePath \"\"" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.194282 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rp4xd"] Jan 27 14:31:26 crc kubenswrapper[4729]: W0127 14:31:26.195501 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccd370a7_8144_4880_8412_c55559238c41.slice/crio-ab79c0131a2cc5a42568ce87911d21414c21f9fe06e2861a0317cd30fde96e1c WatchSource:0}: Error finding container ab79c0131a2cc5a42568ce87911d21414c21f9fe06e2861a0317cd30fde96e1c: Status 404 returned error can't find the container with id ab79c0131a2cc5a42568ce87911d21414c21f9fe06e2861a0317cd30fde96e1c Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.322065 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerStarted","Data":"ab79c0131a2cc5a42568ce87911d21414c21f9fe06e2861a0317cd30fde96e1c"} Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.325122 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"adf6ced5-b7ed-4e10-b746-2f5ae329f4d8","Type":"ContainerDied","Data":"85d93cbed5b412af967c03d1561e541b4bcf5394c261bdd7a46db5c90c2be680"} Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.325197 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.325208 4729 scope.go:117] "RemoveContainer" containerID="a8389f09321884e19c4bd00ed6f9c014034ca08a9798247050f144c2c1972748" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.329904 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"dd3b62646a00aa6377eaf1964f657ba7e211214037b77942fada5ff00b5268c6"} Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.361979 4729 scope.go:117] "RemoveContainer" containerID="5a91e205df2de1035e3bab27ad791de5d5a5303cbea68d0070693cea5ac8636d" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.373302 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.385521 4729 scope.go:117] "RemoveContainer" containerID="a0c233716c4a01787aefd955aeb3845c44cbcc26c5938224e84faa71f80af099" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.398998 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.408260 4729 scope.go:117] "RemoveContainer" containerID="e66d6a69fb86ffc46d6cd6f55bc26898855d203774774300de53a1b55587a1e2" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.414107 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4729]: E0127 14:31:26.414656 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="config-reloader" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.414683 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="config-reloader" Jan 27 14:31:26 crc kubenswrapper[4729]: E0127 14:31:26.414719 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="init-config-reloader" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.414728 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="init-config-reloader" Jan 27 14:31:26 crc kubenswrapper[4729]: E0127 14:31:26.414766 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="prometheus" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.414774 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="prometheus" Jan 27 14:31:26 crc kubenswrapper[4729]: E0127 14:31:26.414785 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="thanos-sidecar" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.414793 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="thanos-sidecar" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.415047 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="thanos-sidecar" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.415069 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="config-reloader" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.415108 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" containerName="prometheus" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.417473 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.431391 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.431457 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.431658 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.431674 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.431778 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2dbph" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.431659 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.433222 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.438485 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.442248 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.447390 4729 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533362 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533453 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533490 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533548 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533671 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533719 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533819 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533840 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533895 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.533914 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-config\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.534021 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/579561e5-ea93-4bb7-bf73-5107d60a62b9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.534087 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/579561e5-ea93-4bb7-bf73-5107d60a62b9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.534150 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9kp\" (UniqueName: \"kubernetes.io/projected/579561e5-ea93-4bb7-bf73-5107d60a62b9-kube-api-access-hh9kp\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639390 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639674 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639752 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639793 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639821 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " 
pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639870 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639907 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639931 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639953 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-config\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.639989 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/579561e5-ea93-4bb7-bf73-5107d60a62b9-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.640024 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/579561e5-ea93-4bb7-bf73-5107d60a62b9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.640063 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9kp\" (UniqueName: \"kubernetes.io/projected/579561e5-ea93-4bb7-bf73-5107d60a62b9-kube-api-access-hh9kp\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.640141 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.647907 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.648473 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.648567 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.649203 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.651315 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/579561e5-ea93-4bb7-bf73-5107d60a62b9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.656111 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/579561e5-ea93-4bb7-bf73-5107d60a62b9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.657448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.657544 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.664304 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.666869 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.666944 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0b21704c3f0d71a2fe30bdd41c3791c72542ca1dfc4c58962ce47cac47fe929c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.667743 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/579561e5-ea93-4bb7-bf73-5107d60a62b9-config\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.671127 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/579561e5-ea93-4bb7-bf73-5107d60a62b9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.672212 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9kp\" (UniqueName: \"kubernetes.io/projected/579561e5-ea93-4bb7-bf73-5107d60a62b9-kube-api-access-hh9kp\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.712391 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6052c0b7-d399-4bcc-9ec9-29aae709f310\") pod \"prometheus-metric-storage-0\" (UID: \"579561e5-ea93-4bb7-bf73-5107d60a62b9\") " pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:26 crc kubenswrapper[4729]: E0127 14:31:26.732776 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:26 crc kubenswrapper[4729]: I0127 14:31:26.794776 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 27 14:31:27 crc kubenswrapper[4729]: E0127 14:31:27.265007 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 27 14:31:27 crc kubenswrapper[4729]: E0127 14:31:27.265456 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6dbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mvsqd_openstack(1f2d8fdf-9710-4e95-a733-8ce7f61951eb): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Jan 27 14:31:27 crc kubenswrapper[4729]: E0127 14:31:27.266637 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mvsqd" podUID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" Jan 27 14:31:27 crc kubenswrapper[4729]: I0127 14:31:27.343405 4729 generic.go:334] "Generic (PLEG): container finished" podID="ccd370a7-8144-4880-8412-c55559238c41" containerID="9680fb622bd803d2de59a7fddc3a1d506360aaa5f37f9deedc39d8d81bd12c65" exitCode=0 Jan 27 14:31:27 crc kubenswrapper[4729]: I0127 14:31:27.343514 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerDied","Data":"9680fb622bd803d2de59a7fddc3a1d506360aaa5f37f9deedc39d8d81bd12c65"} Jan 27 14:31:27 crc kubenswrapper[4729]: E0127 14:31:27.345191 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-mvsqd" podUID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" Jan 27 14:31:27 crc kubenswrapper[4729]: I0127 14:31:27.394150 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 27 14:31:28 crc kubenswrapper[4729]: I0127 14:31:28.069634 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf6ced5-b7ed-4e10-b746-2f5ae329f4d8" path="/var/lib/kubelet/pods/adf6ced5-b7ed-4e10-b746-2f5ae329f4d8/volumes" Jan 27 14:31:28 crc kubenswrapper[4729]: I0127 14:31:28.354217 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"579561e5-ea93-4bb7-bf73-5107d60a62b9","Type":"ContainerStarted","Data":"d984a747c2ff5eb32c20f125a835bf5fcbfcfa656fd7215ce65ca0a90b3c69ff"} Jan 27 14:31:29 crc kubenswrapper[4729]: I0127 14:31:29.051607 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:31:29 crc kubenswrapper[4729]: E0127 14:31:29.052308 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:31:29 crc kubenswrapper[4729]: I0127 14:31:29.365176 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerStarted","Data":"b1157d29a5982f84630323de4c4399db2961b1582d1c2aa36ee6cae42f0e9f6d"} Jan 27 14:31:29 crc kubenswrapper[4729]: I0127 14:31:29.837819 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.125:5671: connect: connection refused" Jan 27 14:31:29 crc kubenswrapper[4729]: I0127 14:31:29.962591 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.138661 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.141511 4729 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.151654 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.153055 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.182911 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.218239 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.218545 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-config-data\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.218856 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69rng\" (UniqueName: \"kubernetes.io/projected/45271248-b433-4451-8ee0-30c55a37e285-kube-api-access-69rng\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.321012 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.321112 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-config-data\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.321177 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69rng\" (UniqueName: \"kubernetes.io/projected/45271248-b433-4451-8ee0-30c55a37e285-kube-api-access-69rng\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.332098 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-config-data\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.337700 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.339972 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69rng\" (UniqueName: \"kubernetes.io/projected/45271248-b433-4451-8ee0-30c55a37e285-kube-api-access-69rng\") pod 
\"mysqld-exporter-0\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.377397 4729 generic.go:334] "Generic (PLEG): container finished" podID="ccd370a7-8144-4880-8412-c55559238c41" containerID="b1157d29a5982f84630323de4c4399db2961b1582d1c2aa36ee6cae42f0e9f6d" exitCode=0 Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.377477 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerDied","Data":"b1157d29a5982f84630323de4c4399db2961b1582d1c2aa36ee6cae42f0e9f6d"} Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.382135 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"87482fb348c5708945d231df91f6c1cf7b1a4af24f9cf2f69cd42cc03ebb6208"} Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.479439 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 14:31:30 crc kubenswrapper[4729]: I0127 14:31:30.825098 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:31:30 crc kubenswrapper[4729]: E0127 14:31:30.923396 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:31 crc kubenswrapper[4729]: I0127 14:31:31.029146 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:31:31 crc kubenswrapper[4729]: W0127 14:31:31.029562 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45271248_b433_4451_8ee0_30c55a37e285.slice/crio-a39ccbb18cf3effd188b3e195776b1e5ec3a7c05963d19294c2ce40e564593ed WatchSource:0}: Error finding container a39ccbb18cf3effd188b3e195776b1e5ec3a7c05963d19294c2ce40e564593ed: Status 404 returned error can't find the container with id a39ccbb18cf3effd188b3e195776b1e5ec3a7c05963d19294c2ce40e564593ed Jan 27 14:31:31 crc kubenswrapper[4729]: I0127 14:31:31.393393 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"cfc6e423e1265e4717ed2ac104bbf9f2e24f7e26411a15c17bab9e0030a8dc3f"} Jan 27 14:31:31 crc kubenswrapper[4729]: I0127 14:31:31.394818 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"45271248-b433-4451-8ee0-30c55a37e285","Type":"ContainerStarted","Data":"a39ccbb18cf3effd188b3e195776b1e5ec3a7c05963d19294c2ce40e564593ed"} Jan 27 14:31:31 crc kubenswrapper[4729]: 
I0127 14:31:31.396373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"579561e5-ea93-4bb7-bf73-5107d60a62b9","Type":"ContainerStarted","Data":"6d88c6f2bd7abda69e802198248596a40fd986ff2da6a96a8a863608a960af0c"} Jan 27 14:31:32 crc kubenswrapper[4729]: I0127 14:31:32.408776 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"5c817bdcc048ff31c87123f69c81d64bc1c39aef732515343fd326233dab0484"} Jan 27 14:31:39 crc kubenswrapper[4729]: I0127 14:31:39.839988 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 14:31:39 crc kubenswrapper[4729]: I0127 14:31:39.962762 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:31:40 crc kubenswrapper[4729]: I0127 14:31:40.182807 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:31:41 crc kubenswrapper[4729]: I0127 14:31:41.051685 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:31:41 crc kubenswrapper[4729]: E0127 14:31:41.052086 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:31:41 crc kubenswrapper[4729]: E0127 14:31:41.219764 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:41 crc kubenswrapper[4729]: E0127 14:31:41.484066 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:44 crc kubenswrapper[4729]: I0127 14:31:44.522659 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"576319a8f926f0fab62de862d1e715df0d3230316047c7e28ac1aac4cc02bc23"} Jan 27 14:31:45 crc kubenswrapper[4729]: I0127 14:31:45.532863 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerStarted","Data":"f1f7c16478d54dd23e814d41deeb7510904ec075a7563052eb1273d6a22a4eae"} Jan 27 14:31:46 crc kubenswrapper[4729]: I0127 14:31:46.568604 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rp4xd" podStartSLOduration=4.061721095 podStartE2EDuration="21.568584266s" podCreationTimestamp="2026-01-27 14:31:25 +0000 UTC" firstStartedPulling="2026-01-27 14:31:27.345140744 +0000 UTC m=+1573.929331738" lastFinishedPulling="2026-01-27 14:31:44.852003905 +0000 UTC m=+1591.436194909" 
observedRunningTime="2026-01-27 14:31:46.560900587 +0000 UTC m=+1593.145091611" watchObservedRunningTime="2026-01-27 14:31:46.568584266 +0000 UTC m=+1593.152775280" Jan 27 14:31:48 crc kubenswrapper[4729]: E0127 14:31:48.223825 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:48 crc kubenswrapper[4729]: E0127 14:31:48.224077 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:49 crc kubenswrapper[4729]: I0127 14:31:49.572982 4729 generic.go:334] "Generic (PLEG): container finished" podID="579561e5-ea93-4bb7-bf73-5107d60a62b9" containerID="6d88c6f2bd7abda69e802198248596a40fd986ff2da6a96a8a863608a960af0c" exitCode=0 Jan 27 14:31:49 crc kubenswrapper[4729]: I0127 14:31:49.573063 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"579561e5-ea93-4bb7-bf73-5107d60a62b9","Type":"ContainerDied","Data":"6d88c6f2bd7abda69e802198248596a40fd986ff2da6a96a8a863608a960af0c"} Jan 27 14:31:49 crc kubenswrapper[4729]: I0127 14:31:49.963748 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:31:50 crc kubenswrapper[4729]: I0127 14:31:50.183919 4729 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:31:51 crc kubenswrapper[4729]: E0127 14:31:51.263416 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:52 crc kubenswrapper[4729]: I0127 14:31:52.051972 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:31:52 crc kubenswrapper[4729]: E0127 14:31:52.052304 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.149330 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvv5h"] Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.158497 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.204434 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvv5h"] Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.326164 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-catalog-content\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.326339 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-utilities\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.326374 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gzcg\" (UniqueName: \"kubernetes.io/projected/5c7e2613-4511-4820-8f3b-9d48840b780f-kube-api-access-8gzcg\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.428075 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-catalog-content\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.428227 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-utilities\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.428262 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gzcg\" (UniqueName: \"kubernetes.io/projected/5c7e2613-4511-4820-8f3b-9d48840b780f-kube-api-access-8gzcg\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.428926 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-catalog-content\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.429257 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-utilities\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.453501 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gzcg\" (UniqueName: \"kubernetes.io/projected/5c7e2613-4511-4820-8f3b-9d48840b780f-kube-api-access-8gzcg\") pod \"community-operators-lvv5h\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.505770 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:31:53 crc kubenswrapper[4729]: I0127 14:31:53.642889 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"579561e5-ea93-4bb7-bf73-5107d60a62b9","Type":"ContainerStarted","Data":"05b99751fcc7f1ea1ff8b11d6295c28eb95a076828cf98fee27d178211d32228"} Jan 27 14:31:54 crc kubenswrapper[4729]: I0127 14:31:54.095442 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvv5h"] Jan 27 14:31:54 crc kubenswrapper[4729]: I0127 14:31:54.652156 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvv5h" event={"ID":"5c7e2613-4511-4820-8f3b-9d48840b780f","Type":"ContainerStarted","Data":"2dd0d97540d8950d76aaa6c2773dbfe0c242223f396465382fdbe850626ca06a"} Jan 27 14:31:55 crc kubenswrapper[4729]: I0127 14:31:55.663603 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mvsqd" event={"ID":"1f2d8fdf-9710-4e95-a733-8ce7f61951eb","Type":"ContainerStarted","Data":"b2b782c35a7c78c09c5bd3b2ae2b768ed5bf1923baed5a33e7aee56cb34c2893"} Jan 27 14:31:55 crc kubenswrapper[4729]: I0127 14:31:55.681986 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:55 crc kubenswrapper[4729]: I0127 14:31:55.682405 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:55 crc kubenswrapper[4729]: I0127 14:31:55.733614 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:56 crc kubenswrapper[4729]: I0127 14:31:56.673811 4729 generic.go:334] "Generic (PLEG): container finished" podID="5c7e2613-4511-4820-8f3b-9d48840b780f" 
containerID="c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b" exitCode=0 Jan 27 14:31:56 crc kubenswrapper[4729]: I0127 14:31:56.675101 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvv5h" event={"ID":"5c7e2613-4511-4820-8f3b-9d48840b780f","Type":"ContainerDied","Data":"c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b"} Jan 27 14:31:56 crc kubenswrapper[4729]: E0127 14:31:56.686191 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:31:56 crc kubenswrapper[4729]: I0127 14:31:56.734703 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:31:56 crc kubenswrapper[4729]: I0127 14:31:56.738843 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mvsqd" podStartSLOduration=8.019507485 podStartE2EDuration="48.738825202s" podCreationTimestamp="2026-01-27 14:31:08 +0000 UTC" firstStartedPulling="2026-01-27 14:31:09.956111584 +0000 UTC m=+1556.540302588" lastFinishedPulling="2026-01-27 14:31:50.675429301 +0000 UTC m=+1597.259620305" observedRunningTime="2026-01-27 14:31:56.725317286 +0000 UTC m=+1603.309508300" watchObservedRunningTime="2026-01-27 14:31:56.738825202 +0000 UTC m=+1603.323016206" Jan 27 14:31:58 crc kubenswrapper[4729]: I0127 14:31:58.311779 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rp4xd"] Jan 27 14:31:58 crc kubenswrapper[4729]: I0127 14:31:58.691421 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rp4xd" 
podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="registry-server" containerID="cri-o://f1f7c16478d54dd23e814d41deeb7510904ec075a7563052eb1273d6a22a4eae" gracePeriod=2 Jan 27 14:31:59 crc kubenswrapper[4729]: I0127 14:31:59.703730 4729 generic.go:334] "Generic (PLEG): container finished" podID="ccd370a7-8144-4880-8412-c55559238c41" containerID="f1f7c16478d54dd23e814d41deeb7510904ec075a7563052eb1273d6a22a4eae" exitCode=0 Jan 27 14:31:59 crc kubenswrapper[4729]: I0127 14:31:59.703771 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerDied","Data":"f1f7c16478d54dd23e814d41deeb7510904ec075a7563052eb1273d6a22a4eae"} Jan 27 14:31:59 crc kubenswrapper[4729]: I0127 14:31:59.707436 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"579561e5-ea93-4bb7-bf73-5107d60a62b9","Type":"ContainerStarted","Data":"931353c22a3017d1e8190d0465fe1815803b7079b26d936338dece187784f845"} Jan 27 14:31:59 crc kubenswrapper[4729]: I0127 14:31:59.962834 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:32:00 crc kubenswrapper[4729]: I0127 14:32:00.183996 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:32:01 crc kubenswrapper[4729]: E0127 14:32:01.308642 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:32:02 crc kubenswrapper[4729]: E0127 14:32:02.485016 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-container:current-podified" Jan 27 14:32:02 crc kubenswrapper[4729]: E0127 14:32:02.485392 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-server,Image:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,Command:[/usr/bin/swift-container-server /etc/swift/container-server.conf.d -v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:container,HostPort:0,ContainerPort:6201,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n668h54bhc8h696h64dh576h589hd8hd7h594h668h664h59hc8h58h556h5b7h595h67fh58dhd4h55fh5h5c4hfdh5f7h549h5bdh584h56h85h8fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMoun
t{Name:kube-api-access-pj7sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(399f6c9f-a3d5-4235-bce9-f3623e6be7f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.740492 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rp4xd" event={"ID":"ccd370a7-8144-4880-8412-c55559238c41","Type":"ContainerDied","Data":"ab79c0131a2cc5a42568ce87911d21414c21f9fe06e2861a0317cd30fde96e1c"} Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.740538 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab79c0131a2cc5a42568ce87911d21414c21f9fe06e2861a0317cd30fde96e1c" Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.858593 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.969892 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpjtw\" (UniqueName: \"kubernetes.io/projected/ccd370a7-8144-4880-8412-c55559238c41-kube-api-access-qpjtw\") pod \"ccd370a7-8144-4880-8412-c55559238c41\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.969971 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-utilities\") pod \"ccd370a7-8144-4880-8412-c55559238c41\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.970199 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-catalog-content\") pod \"ccd370a7-8144-4880-8412-c55559238c41\" (UID: \"ccd370a7-8144-4880-8412-c55559238c41\") " Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.975449 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-utilities" (OuterVolumeSpecName: "utilities") pod "ccd370a7-8144-4880-8412-c55559238c41" (UID: "ccd370a7-8144-4880-8412-c55559238c41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:02 crc kubenswrapper[4729]: I0127 14:32:02.983081 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccd370a7-8144-4880-8412-c55559238c41-kube-api-access-qpjtw" (OuterVolumeSpecName: "kube-api-access-qpjtw") pod "ccd370a7-8144-4880-8412-c55559238c41" (UID: "ccd370a7-8144-4880-8412-c55559238c41"). InnerVolumeSpecName "kube-api-access-qpjtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.073021 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.073080 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpjtw\" (UniqueName: \"kubernetes.io/projected/ccd370a7-8144-4880-8412-c55559238c41-kube-api-access-qpjtw\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.110305 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccd370a7-8144-4880-8412-c55559238c41" (UID: "ccd370a7-8144-4880-8412-c55559238c41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.175255 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccd370a7-8144-4880-8412-c55559238c41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.753753 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rp4xd" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.753748 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"579561e5-ea93-4bb7-bf73-5107d60a62b9","Type":"ContainerStarted","Data":"9a76d6bbf1af56165ee6fa6b833c5b45dcfa2801f8dccbd4801ec1d75f8d1d47"} Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.786552 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=37.786535901 podStartE2EDuration="37.786535901s" podCreationTimestamp="2026-01-27 14:31:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:03.783578371 +0000 UTC m=+1610.367769385" watchObservedRunningTime="2026-01-27 14:32:03.786535901 +0000 UTC m=+1610.370726905" Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.804646 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rp4xd"] Jan 27 14:32:03 crc kubenswrapper[4729]: I0127 14:32:03.818353 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rp4xd"] Jan 27 14:32:04 crc kubenswrapper[4729]: I0127 14:32:04.085696 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccd370a7-8144-4880-8412-c55559238c41" path="/var/lib/kubelet/pods/ccd370a7-8144-4880-8412-c55559238c41/volumes" Jan 27 14:32:04 crc kubenswrapper[4729]: I0127 14:32:04.770859 4729 generic.go:334] "Generic (PLEG): container finished" podID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerID="01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1" exitCode=0 Jan 27 14:32:04 crc kubenswrapper[4729]: I0127 14:32:04.770970 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvv5h" 
event={"ID":"5c7e2613-4511-4820-8f3b-9d48840b780f","Type":"ContainerDied","Data":"01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1"} Jan 27 14:32:04 crc kubenswrapper[4729]: I0127 14:32:04.772951 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"45271248-b433-4451-8ee0-30c55a37e285","Type":"ContainerStarted","Data":"f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f"} Jan 27 14:32:04 crc kubenswrapper[4729]: I0127 14:32:04.813321 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.54545479 podStartE2EDuration="34.813302711s" podCreationTimestamp="2026-01-27 14:31:30 +0000 UTC" firstStartedPulling="2026-01-27 14:31:31.046520839 +0000 UTC m=+1577.630711843" lastFinishedPulling="2026-01-27 14:32:03.31436875 +0000 UTC m=+1609.898559764" observedRunningTime="2026-01-27 14:32:04.812724665 +0000 UTC m=+1611.396915699" watchObservedRunningTime="2026-01-27 14:32:04.813302711 +0000 UTC m=+1611.397493715" Jan 27 14:32:05 crc kubenswrapper[4729]: I0127 14:32:05.050983 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:32:05 crc kubenswrapper[4729]: E0127 14:32:05.051270 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:32:06 crc kubenswrapper[4729]: I0127 14:32:06.794950 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:06 crc kubenswrapper[4729]: I0127 14:32:06.801433 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvv5h" event={"ID":"5c7e2613-4511-4820-8f3b-9d48840b780f","Type":"ContainerStarted","Data":"c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2"} Jan 27 14:32:06 crc kubenswrapper[4729]: I0127 14:32:06.814338 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"ce7f3eaf2062235ab194565c639ff4f69cc471cfb8ffd8f347a20f76e82d5581"} Jan 27 14:32:06 crc kubenswrapper[4729]: I0127 14:32:06.814398 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"d16a897d75c561d13c75f51191180dd419ed278ea3a290491296f2980e1d84e2"} Jan 27 14:32:06 crc kubenswrapper[4729]: I0127 14:32:06.814413 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"f626c19e1b9c226388c426c150cddc3a7743a6094f600007e53bbfdeff278ec2"} Jan 27 14:32:06 crc kubenswrapper[4729]: I0127 14:32:06.824200 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvv5h" podStartSLOduration=5.411110492 podStartE2EDuration="13.824177104s" podCreationTimestamp="2026-01-27 14:31:53 +0000 UTC" firstStartedPulling="2026-01-27 14:31:58.027370572 +0000 UTC m=+1604.611561586" lastFinishedPulling="2026-01-27 14:32:06.440437194 +0000 UTC m=+1613.024628198" observedRunningTime="2026-01-27 14:32:06.818817586 +0000 UTC m=+1613.403008590" watchObservedRunningTime="2026-01-27 14:32:06.824177104 +0000 UTC m=+1613.408368128" Jan 27 14:32:07 crc kubenswrapper[4729]: I0127 14:32:07.838118 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"30e435986f4cba0dc7532128dec8e6741d1fa12e0e3a399bab11e8b0786c9f57"} Jan 27 14:32:09 crc kubenswrapper[4729]: I0127 14:32:09.933530 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"8679e4c1174485643c6320b93bbfa7afc19d7ea712c597e6de43ca2ada5c20f7"} Jan 27 14:32:09 crc kubenswrapper[4729]: I0127 14:32:09.967897 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 14:32:10 crc kubenswrapper[4729]: I0127 14:32:10.182956 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:32:11 crc kubenswrapper[4729]: E0127 14:32:11.540724 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:32:11 crc kubenswrapper[4729]: E0127 14:32:11.542470 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cfdd20_ad90_472d_8962_6bec29b3fa74.slice/crio-conmon-1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:32:11 crc kubenswrapper[4729]: I0127 14:32:11.795202 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:11 crc 
kubenswrapper[4729]: I0127 14:32:11.804647 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:11 crc kubenswrapper[4729]: I0127 14:32:11.955752 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 27 14:32:12 crc kubenswrapper[4729]: I0127 14:32:12.984090 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"ca75e500cf37b1c891bdd9b9a3c3f5f09f4c28d27435a07c44b6c9f3efdbc5b2"} Jan 27 14:32:13 crc kubenswrapper[4729]: I0127 14:32:13.507237 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:32:13 crc kubenswrapper[4729]: I0127 14:32:13.507307 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:32:13 crc kubenswrapper[4729]: I0127 14:32:13.555804 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:32:14 crc kubenswrapper[4729]: I0127 14:32:14.043597 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:32:14 crc kubenswrapper[4729]: I0127 14:32:14.097417 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvv5h"] Jan 27 14:32:15 crc kubenswrapper[4729]: I0127 14:32:15.004614 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"eefb15b1b44b9d3e9ce44d5958d9e6755dbfcd8ae756a870291d5e8aa3aec83a"} Jan 27 14:32:15 crc kubenswrapper[4729]: E0127 14:32:15.961789 4729 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"container-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container:current-podified\\\"\"]" pod="openstack/swift-storage-0" podUID="399f6c9f-a3d5-4235-bce9-f3623e6be7f4" Jan 27 14:32:16 crc kubenswrapper[4729]: I0127 14:32:16.012694 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvv5h" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="registry-server" containerID="cri-o://c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2" gracePeriod=2 Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.532823 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.578763 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gzcg\" (UniqueName: \"kubernetes.io/projected/5c7e2613-4511-4820-8f3b-9d48840b780f-kube-api-access-8gzcg\") pod \"5c7e2613-4511-4820-8f3b-9d48840b780f\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.578981 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-utilities\") pod \"5c7e2613-4511-4820-8f3b-9d48840b780f\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.579020 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-catalog-content\") pod \"5c7e2613-4511-4820-8f3b-9d48840b780f\" (UID: \"5c7e2613-4511-4820-8f3b-9d48840b780f\") " Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.582087 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-utilities" (OuterVolumeSpecName: "utilities") pod "5c7e2613-4511-4820-8f3b-9d48840b780f" (UID: "5c7e2613-4511-4820-8f3b-9d48840b780f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.589099 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7e2613-4511-4820-8f3b-9d48840b780f-kube-api-access-8gzcg" (OuterVolumeSpecName: "kube-api-access-8gzcg") pod "5c7e2613-4511-4820-8f3b-9d48840b780f" (UID: "5c7e2613-4511-4820-8f3b-9d48840b780f"). InnerVolumeSpecName "kube-api-access-8gzcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.642228 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c7e2613-4511-4820-8f3b-9d48840b780f" (UID: "5c7e2613-4511-4820-8f3b-9d48840b780f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.681695 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gzcg\" (UniqueName: \"kubernetes.io/projected/5c7e2613-4511-4820-8f3b-9d48840b780f-kube-api-access-8gzcg\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.681731 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:17 crc kubenswrapper[4729]: I0127 14:32:17.681740 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7e2613-4511-4820-8f3b-9d48840b780f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.031748 4729 generic.go:334] "Generic (PLEG): container finished" podID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerID="c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2" exitCode=0 Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.031797 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvv5h" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.031797 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvv5h" event={"ID":"5c7e2613-4511-4820-8f3b-9d48840b780f","Type":"ContainerDied","Data":"c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2"} Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.031850 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvv5h" event={"ID":"5c7e2613-4511-4820-8f3b-9d48840b780f","Type":"ContainerDied","Data":"2dd0d97540d8950d76aaa6c2773dbfe0c242223f396465382fdbe850626ca06a"} Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.031871 4729 scope.go:117] "RemoveContainer" containerID="c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.051134 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:32:18 crc kubenswrapper[4729]: E0127 14:32:18.051629 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.085078 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvv5h"] Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.106467 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvv5h"] Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.631402 4729 scope.go:117] 
"RemoveContainer" containerID="01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.666825 4729 scope.go:117] "RemoveContainer" containerID="c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.778908 4729 scope.go:117] "RemoveContainer" containerID="c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2" Jan 27 14:32:18 crc kubenswrapper[4729]: E0127 14:32:18.779333 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2\": container with ID starting with c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2 not found: ID does not exist" containerID="c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.779371 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2"} err="failed to get container status \"c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2\": rpc error: code = NotFound desc = could not find container \"c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2\": container with ID starting with c681c42ccae9664dd90ad0a5995025a35d724b5f6c259d4ee0da13e47d8074d2 not found: ID does not exist" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.779392 4729 scope.go:117] "RemoveContainer" containerID="01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1" Jan 27 14:32:18 crc kubenswrapper[4729]: E0127 14:32:18.779712 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1\": container with ID starting with 
01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1 not found: ID does not exist" containerID="01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.779733 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1"} err="failed to get container status \"01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1\": rpc error: code = NotFound desc = could not find container \"01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1\": container with ID starting with 01a119e2a596a28f4e30941c45aa4d3e61670d68fc93bd2c4c7f7c9c12ffc1b1 not found: ID does not exist" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.779744 4729 scope.go:117] "RemoveContainer" containerID="c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b" Jan 27 14:32:18 crc kubenswrapper[4729]: E0127 14:32:18.780764 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b\": container with ID starting with c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b not found: ID does not exist" containerID="c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b" Jan 27 14:32:18 crc kubenswrapper[4729]: I0127 14:32:18.780813 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b"} err="failed to get container status \"c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b\": rpc error: code = NotFound desc = could not find container \"c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b\": container with ID starting with c8f11b8b35cd2782d69f85994d28b385668b988af1a244a89192f1061ef4ba5b not found: ID does not 
exist" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.074209 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" path="/var/lib/kubelet/pods/5c7e2613-4511-4820-8f3b-9d48840b780f/volumes" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.090279 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"cc700f3206dd15533b8c774e08e7e1346088eac1f0ef80948afd5cdf76719f32"} Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.090339 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"66f8ae45b6b0639f6f440dcb4ed751c2c78b215dee6402ea3fa9dea859d3a072"} Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.090357 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"7ffcb10fde42f866905ea39b73b4600978de122f4c6ab994228d6849afdd2be1"} Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.185931 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.615208 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1cc2-account-create-update-7k7vv"] Jan 27 14:32:20 crc kubenswrapper[4729]: E0127 14:32:20.615945 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="extract-utilities" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.615963 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="extract-utilities" Jan 27 14:32:20 crc kubenswrapper[4729]: E0127 14:32:20.615973 4729 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="extract-content" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.615980 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="extract-content" Jan 27 14:32:20 crc kubenswrapper[4729]: E0127 14:32:20.615993 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="extract-content" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.616000 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="extract-content" Jan 27 14:32:20 crc kubenswrapper[4729]: E0127 14:32:20.616032 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="registry-server" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.616039 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="registry-server" Jan 27 14:32:20 crc kubenswrapper[4729]: E0127 14:32:20.616054 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="registry-server" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.616059 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="registry-server" Jan 27 14:32:20 crc kubenswrapper[4729]: E0127 14:32:20.616073 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="extract-utilities" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.616079 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="extract-utilities" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.616250 4729 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5c7e2613-4511-4820-8f3b-9d48840b780f" containerName="registry-server" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.616260 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccd370a7-8144-4880-8412-c55559238c41" containerName="registry-server" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.617071 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.621767 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.628646 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-42bph"] Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.631898 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.641445 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsh9f\" (UniqueName: \"kubernetes.io/projected/1a98b729-f749-4536-a5eb-671c63734bcd-kube-api-access-hsh9f\") pod \"barbican-db-create-42bph\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.641508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a98b729-f749-4536-a5eb-671c63734bcd-operator-scripts\") pod \"barbican-db-create-42bph\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.641542 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7dvbl\" (UniqueName: \"kubernetes.io/projected/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-kube-api-access-7dvbl\") pod \"barbican-1cc2-account-create-update-7k7vv\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.641828 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-operator-scripts\") pod \"barbican-1cc2-account-create-update-7k7vv\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.727290 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-42bph"] Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.744843 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsh9f\" (UniqueName: \"kubernetes.io/projected/1a98b729-f749-4536-a5eb-671c63734bcd-kube-api-access-hsh9f\") pod \"barbican-db-create-42bph\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.744935 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a98b729-f749-4536-a5eb-671c63734bcd-operator-scripts\") pod \"barbican-db-create-42bph\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.744972 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvbl\" (UniqueName: \"kubernetes.io/projected/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-kube-api-access-7dvbl\") pod \"barbican-1cc2-account-create-update-7k7vv\" (UID: 
\"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.745009 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-operator-scripts\") pod \"barbican-1cc2-account-create-update-7k7vv\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.745803 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-operator-scripts\") pod \"barbican-1cc2-account-create-update-7k7vv\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.746055 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a98b729-f749-4536-a5eb-671c63734bcd-operator-scripts\") pod \"barbican-db-create-42bph\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.788112 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1cc2-account-create-update-7k7vv"] Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.805688 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsh9f\" (UniqueName: \"kubernetes.io/projected/1a98b729-f749-4536-a5eb-671c63734bcd-kube-api-access-hsh9f\") pod \"barbican-db-create-42bph\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " pod="openstack/barbican-db-create-42bph" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.811104 4729 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-7dvbl\" (UniqueName: \"kubernetes.io/projected/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-kube-api-access-7dvbl\") pod \"barbican-1cc2-account-create-update-7k7vv\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.912202 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-tsf6d"] Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.913764 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.929147 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tsf6d"] Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.936285 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.949107 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw4ll\" (UniqueName: \"kubernetes.io/projected/ec91797f-3812-4509-b9c1-dbc72bb4576c-kube-api-access-vw4ll\") pod \"heat-db-create-tsf6d\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.949176 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec91797f-3812-4509-b9c1-dbc72bb4576c-operator-scripts\") pod \"heat-db-create-tsf6d\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:20 crc kubenswrapper[4729]: I0127 14:32:20.955454 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-42bph" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.051101 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw4ll\" (UniqueName: \"kubernetes.io/projected/ec91797f-3812-4509-b9c1-dbc72bb4576c-kube-api-access-vw4ll\") pod \"heat-db-create-tsf6d\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.051165 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec91797f-3812-4509-b9c1-dbc72bb4576c-operator-scripts\") pod \"heat-db-create-tsf6d\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.052000 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec91797f-3812-4509-b9c1-dbc72bb4576c-operator-scripts\") pod \"heat-db-create-tsf6d\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.063279 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8a3f-account-create-update-dqmrt"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.072643 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4ll\" (UniqueName: \"kubernetes.io/projected/ec91797f-3812-4509-b9c1-dbc72bb4576c-kube-api-access-vw4ll\") pod \"heat-db-create-tsf6d\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.073744 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.078895 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.131966 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8a3f-account-create-update-dqmrt"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.163110 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lqmdc"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.164705 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.190361 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kgdx4"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.191768 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.225674 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sqzc6" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.226295 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.226455 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.226626 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.247249 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"399f6c9f-a3d5-4235-bce9-f3623e6be7f4","Type":"ContainerStarted","Data":"ea9008e9a1fa9fba38804bb16d02de2da6073171e452f414a239fd00fcc55a8d"} Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.254823 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lqmdc"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.257331 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f28b\" (UniqueName: \"kubernetes.io/projected/f824a03c-8320-4c09-83ab-7bf997460ad5-kube-api-access-9f28b\") pod \"cinder-8a3f-account-create-update-dqmrt\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") " pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.257409 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f824a03c-8320-4c09-83ab-7bf997460ad5-operator-scripts\") pod \"cinder-8a3f-account-create-update-dqmrt\" (UID: 
\"f824a03c-8320-4c09-83ab-7bf997460ad5\") " pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.284243 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.353974 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kgdx4"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.364756 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896h7\" (UniqueName: \"kubernetes.io/projected/f35b9d6e-1e57-4e7d-812c-8e662e652759-kube-api-access-896h7\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.364868 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-combined-ca-bundle\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.380325 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-config-data\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.380387 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f28b\" (UniqueName: \"kubernetes.io/projected/f824a03c-8320-4c09-83ab-7bf997460ad5-kube-api-access-9f28b\") pod \"cinder-8a3f-account-create-update-dqmrt\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") 
" pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.380439 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f824a03c-8320-4c09-83ab-7bf997460ad5-operator-scripts\") pod \"cinder-8a3f-account-create-update-dqmrt\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") " pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.380484 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/a9cb86fa-4640-4a25-a97e-0212029e2d54-kube-api-access-ddkmp\") pod \"cinder-db-create-lqmdc\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.380643 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cb86fa-4640-4a25-a97e-0212029e2d54-operator-scripts\") pod \"cinder-db-create-lqmdc\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.389146 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f824a03c-8320-4c09-83ab-7bf997460ad5-operator-scripts\") pod \"cinder-8a3f-account-create-update-dqmrt\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") " pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.466010 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f28b\" (UniqueName: \"kubernetes.io/projected/f824a03c-8320-4c09-83ab-7bf997460ad5-kube-api-access-9f28b\") pod 
\"cinder-8a3f-account-create-update-dqmrt\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") " pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.479955 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-7565-account-create-update-n45zb"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.481706 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.482761 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-combined-ca-bundle\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.482966 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-config-data\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.483027 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/a9cb86fa-4640-4a25-a97e-0212029e2d54-kube-api-access-ddkmp\") pod \"cinder-db-create-lqmdc\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.483114 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cb86fa-4640-4a25-a97e-0212029e2d54-operator-scripts\") pod \"cinder-db-create-lqmdc\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " 
pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.483157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-896h7\" (UniqueName: \"kubernetes.io/projected/f35b9d6e-1e57-4e7d-812c-8e662e652759-kube-api-access-896h7\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.484420 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cb86fa-4640-4a25-a97e-0212029e2d54-operator-scripts\") pod \"cinder-db-create-lqmdc\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.490015 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-config-data\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.494950 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-7565-account-create-update-n45zb"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.496364 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.499020 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-combined-ca-bundle\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.524330 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/a9cb86fa-4640-4a25-a97e-0212029e2d54-kube-api-access-ddkmp\") pod \"cinder-db-create-lqmdc\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.524977 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-896h7\" (UniqueName: \"kubernetes.io/projected/f35b9d6e-1e57-4e7d-812c-8e662e652759-kube-api-access-896h7\") pod \"keystone-db-sync-kgdx4\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.528577 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.551799 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.379778215 podStartE2EDuration="1m37.551775851s" podCreationTimestamp="2026-01-27 14:30:44 +0000 UTC" firstStartedPulling="2026-01-27 14:31:25.461340453 +0000 UTC m=+1572.045531457" lastFinishedPulling="2026-01-27 14:32:18.633338089 +0000 UTC m=+1625.217529093" observedRunningTime="2026-01-27 14:32:21.438631493 +0000 UTC m=+1628.022822527" watchObservedRunningTime="2026-01-27 14:32:21.551775851 +0000 UTC m=+1628.135966855" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.586509 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-operator-scripts\") pod \"heat-7565-account-create-update-n45zb\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.587467 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.587677 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j56g4\" (UniqueName: \"kubernetes.io/projected/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-kube-api-access-j56g4\") pod \"heat-7565-account-create-update-n45zb\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.614177 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lrspj"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.615976 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.616592 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.654708 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lrspj"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.690483 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-operator-scripts\") pod \"heat-7565-account-create-update-n45zb\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.690571 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsv47\" (UniqueName: \"kubernetes.io/projected/db170cee-4747-486f-9e1c-ed91b358127c-kube-api-access-qsv47\") pod \"neutron-db-create-lrspj\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " pod="openstack/neutron-db-create-lrspj" 
Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.690802 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db170cee-4747-486f-9e1c-ed91b358127c-operator-scripts\") pod \"neutron-db-create-lrspj\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.690869 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j56g4\" (UniqueName: \"kubernetes.io/projected/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-kube-api-access-j56g4\") pod \"heat-7565-account-create-update-n45zb\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.692358 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-operator-scripts\") pod \"heat-7565-account-create-update-n45zb\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.752923 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-55c3-account-create-update-b2gdg"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.754718 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.757735 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j56g4\" (UniqueName: \"kubernetes.io/projected/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-kube-api-access-j56g4\") pod \"heat-7565-account-create-update-n45zb\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.759280 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.764615 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55c3-account-create-update-b2gdg"] Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.794185 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db170cee-4747-486f-9e1c-ed91b358127c-operator-scripts\") pod \"neutron-db-create-lrspj\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.794242 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44c6dc1-a783-4680-aa97-c68f4c2a435e-operator-scripts\") pod \"neutron-55c3-account-create-update-b2gdg\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.794309 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsv47\" (UniqueName: \"kubernetes.io/projected/db170cee-4747-486f-9e1c-ed91b358127c-kube-api-access-qsv47\") pod \"neutron-db-create-lrspj\" (UID: 
\"db170cee-4747-486f-9e1c-ed91b358127c\") " pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.794346 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdwz\" (UniqueName: \"kubernetes.io/projected/d44c6dc1-a783-4680-aa97-c68f4c2a435e-kube-api-access-vxdwz\") pod \"neutron-55c3-account-create-update-b2gdg\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.795043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db170cee-4747-486f-9e1c-ed91b358127c-operator-scripts\") pod \"neutron-db-create-lrspj\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.824751 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.837326 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsv47\" (UniqueName: \"kubernetes.io/projected/db170cee-4747-486f-9e1c-ed91b358127c-kube-api-access-qsv47\") pod \"neutron-db-create-lrspj\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.901585 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdwz\" (UniqueName: \"kubernetes.io/projected/d44c6dc1-a783-4680-aa97-c68f4c2a435e-kube-api-access-vxdwz\") pod \"neutron-55c3-account-create-update-b2gdg\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.902534 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44c6dc1-a783-4680-aa97-c68f4c2a435e-operator-scripts\") pod \"neutron-55c3-account-create-update-b2gdg\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.903434 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44c6dc1-a783-4680-aa97-c68f4c2a435e-operator-scripts\") pod \"neutron-55c3-account-create-update-b2gdg\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.949425 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdwz\" (UniqueName: \"kubernetes.io/projected/d44c6dc1-a783-4680-aa97-c68f4c2a435e-kube-api-access-vxdwz\") pod 
\"neutron-55c3-account-create-update-b2gdg\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:21 crc kubenswrapper[4729]: I0127 14:32:21.962784 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.000620 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-lgdfh"] Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.004053 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.007155 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.023216 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-lgdfh"] Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.115801 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xdw\" (UniqueName: \"kubernetes.io/projected/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-kube-api-access-g6xdw\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.116029 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.116071 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.116109 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.116169 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.116198 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-config\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.117478 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1cc2-account-create-update-7k7vv"] Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.128433 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.221402 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.221468 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.221512 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.221583 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.222541 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.222629 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.223156 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.223464 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-config\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.223687 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.223739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-config\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.224105 
4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xdw\" (UniqueName: \"kubernetes.io/projected/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-kube-api-access-g6xdw\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: W0127 14:32:22.246999 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod257aa9e6_20b3_41bf_b4e7_d2f60bd884a7.slice/crio-39a2f680da14ce895d66d13f05123ac0cc49c116bc4dde8a0643ac451ee3b886 WatchSource:0}: Error finding container 39a2f680da14ce895d66d13f05123ac0cc49c116bc4dde8a0643ac451ee3b886: Status 404 returned error can't find the container with id 39a2f680da14ce895d66d13f05123ac0cc49c116bc4dde8a0643ac451ee3b886 Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.259354 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-42bph"] Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.267747 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xdw\" (UniqueName: \"kubernetes.io/projected/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-kube-api-access-g6xdw\") pod \"dnsmasq-dns-5c79d794d7-lgdfh\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.295493 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1cc2-account-create-update-7k7vv" event={"ID":"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7","Type":"ContainerStarted","Data":"39a2f680da14ce895d66d13f05123ac0cc49c116bc4dde8a0643ac451ee3b886"} Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.491417 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.551943 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-tsf6d"] Jan 27 14:32:22 crc kubenswrapper[4729]: W0127 14:32:22.562652 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec91797f_3812_4509_b9c1_dbc72bb4576c.slice/crio-fedaca9cd4fb2e549cfdc03b879cf803d0b49e697c70a2b1192aa096c9129537 WatchSource:0}: Error finding container fedaca9cd4fb2e549cfdc03b879cf803d0b49e697c70a2b1192aa096c9129537: Status 404 returned error can't find the container with id fedaca9cd4fb2e549cfdc03b879cf803d0b49e697c70a2b1192aa096c9129537 Jan 27 14:32:22 crc kubenswrapper[4729]: E0127 14:32:22.716641 4729 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.171:55626->38.129.56.171:42429: write tcp 38.129.56.171:55626->38.129.56.171:42429: write: broken pipe Jan 27 14:32:22 crc kubenswrapper[4729]: I0127 14:32:22.857008 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8a3f-account-create-update-dqmrt"] Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.162757 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lqmdc"] Jan 27 14:32:23 crc kubenswrapper[4729]: W0127 14:32:23.226721 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35b9d6e_1e57_4e7d_812c_8e662e652759.slice/crio-70c34a4ef564cd02bd39f6a63eca8368cebbe72b6e8e3d6300354359e40368d7 WatchSource:0}: Error finding container 70c34a4ef564cd02bd39f6a63eca8368cebbe72b6e8e3d6300354359e40368d7: Status 404 returned error can't find the container with id 70c34a4ef564cd02bd39f6a63eca8368cebbe72b6e8e3d6300354359e40368d7 Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.232586 4729 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/neutron-db-create-lrspj"] Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.273175 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kgdx4"] Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.325264 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-7565-account-create-update-n45zb"] Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.341978 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-55c3-account-create-update-b2gdg"] Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.346508 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lrspj" event={"ID":"db170cee-4747-486f-9e1c-ed91b358127c","Type":"ContainerStarted","Data":"cff22f6f59ac20fc2b3f3b7d84c532554a19ca3382e5915398755a5edf7962ef"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.348039 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c3-account-create-update-b2gdg" event={"ID":"d44c6dc1-a783-4680-aa97-c68f4c2a435e","Type":"ContainerStarted","Data":"8e1efe1283e0a1d9878261c3e8caba7bd72c21f711b7679e8745fa23ea164210"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.349689 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lqmdc" event={"ID":"a9cb86fa-4640-4a25-a97e-0212029e2d54","Type":"ContainerStarted","Data":"5c06610ca6acb8889baebf301a489c30a41f154a528d53983515b55333b1c9e5"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.353338 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kgdx4" event={"ID":"f35b9d6e-1e57-4e7d-812c-8e662e652759","Type":"ContainerStarted","Data":"70c34a4ef564cd02bd39f6a63eca8368cebbe72b6e8e3d6300354359e40368d7"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.355717 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a3f-account-create-update-dqmrt" 
event={"ID":"f824a03c-8320-4c09-83ab-7bf997460ad5","Type":"ContainerStarted","Data":"11cbfeba36e2b3ae1ac65c78e0c35c7f7f1b447e37625e8d2a7d7e6665775f07"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.355747 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a3f-account-create-update-dqmrt" event={"ID":"f824a03c-8320-4c09-83ab-7bf997460ad5","Type":"ContainerStarted","Data":"7af9db04fe9bb6ff11b295ff206aacb7944613f0b0b4a753b07631d530c605b8"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.373291 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7565-account-create-update-n45zb" event={"ID":"c59aa2e6-ef2f-459c-9db7-0765405fe2e7","Type":"ContainerStarted","Data":"ff3ca81f44c7c5ae2d596b56763231872a2c4cc3fb739a6ade6f80fd884636b6"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.383246 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tsf6d" event={"ID":"ec91797f-3812-4509-b9c1-dbc72bb4576c","Type":"ContainerStarted","Data":"113c99986a264b5e8c71e34ddc0c4ca8f433e3e6850fb3288188b1faadc95ad6"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.383297 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tsf6d" event={"ID":"ec91797f-3812-4509-b9c1-dbc72bb4576c","Type":"ContainerStarted","Data":"fedaca9cd4fb2e549cfdc03b879cf803d0b49e697c70a2b1192aa096c9129537"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.387243 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8a3f-account-create-update-dqmrt" podStartSLOduration=2.38722276 podStartE2EDuration="2.38722276s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:23.373571282 +0000 UTC m=+1629.957762286" watchObservedRunningTime="2026-01-27 14:32:23.38722276 +0000 UTC m=+1629.971413774" Jan 
27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.402376 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-42bph" event={"ID":"1a98b729-f749-4536-a5eb-671c63734bcd","Type":"ContainerStarted","Data":"a9dd6c52c2f95e7befbd51526aac2fd038f939db117258dbeecde1c3ab9990d2"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.402440 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-42bph" event={"ID":"1a98b729-f749-4536-a5eb-671c63734bcd","Type":"ContainerStarted","Data":"c3c948a56cf7b88aecaa26d35f6948bfbace99f45b0cfa5979176348a6a3b9e3"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.405478 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-tsf6d" podStartSLOduration=3.405463474 podStartE2EDuration="3.405463474s" podCreationTimestamp="2026-01-27 14:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:23.395505369 +0000 UTC m=+1629.979696373" watchObservedRunningTime="2026-01-27 14:32:23.405463474 +0000 UTC m=+1629.989654478" Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.424075 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-42bph" podStartSLOduration=3.424055558 podStartE2EDuration="3.424055558s" podCreationTimestamp="2026-01-27 14:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:23.419308387 +0000 UTC m=+1630.003499391" watchObservedRunningTime="2026-01-27 14:32:23.424055558 +0000 UTC m=+1630.008246562" Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.440285 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1cc2-account-create-update-7k7vv" 
event={"ID":"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7","Type":"ContainerStarted","Data":"b5f26b984487f9d2eb089ff7da8d8e7f0ef34ac85b2c821249ae26b6279e89f3"} Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.467729 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-lgdfh"] Jan 27 14:32:23 crc kubenswrapper[4729]: I0127 14:32:23.467976 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-1cc2-account-create-update-7k7vv" podStartSLOduration=3.467953472 podStartE2EDuration="3.467953472s" podCreationTimestamp="2026-01-27 14:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:23.461442212 +0000 UTC m=+1630.045633216" watchObservedRunningTime="2026-01-27 14:32:23.467953472 +0000 UTC m=+1630.052144476" Jan 27 14:32:23 crc kubenswrapper[4729]: W0127 14:32:23.485073 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e66cbe5_29e9_407c_be54_e6dc2b4e84bb.slice/crio-fcd1f3ca774c109dc8fe0b1335a86e5b5c8834c92479f2772cf70ca1fff81002 WatchSource:0}: Error finding container fcd1f3ca774c109dc8fe0b1335a86e5b5c8834c92479f2772cf70ca1fff81002: Status 404 returned error can't find the container with id fcd1f3ca774c109dc8fe0b1335a86e5b5c8834c92479f2772cf70ca1fff81002 Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.521320 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lrspj" event={"ID":"db170cee-4747-486f-9e1c-ed91b358127c","Type":"ContainerStarted","Data":"8f0c0bbf4e9b73ceb56c61f98c497ddd247df45472d5740c9f12e1a1843c916b"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.530825 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c3-account-create-update-b2gdg" 
event={"ID":"d44c6dc1-a783-4680-aa97-c68f4c2a435e","Type":"ContainerStarted","Data":"73abc768f98887d6be8f8b3063413ecd1af962851f5d41e6c2b2d92e0e01dcf2"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.534328 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lqmdc" event={"ID":"a9cb86fa-4640-4a25-a97e-0212029e2d54","Type":"ContainerStarted","Data":"c67e543d704ed34a37a0c05b3dc7747a191cb4be108360a325ce61feaac4c636"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.551194 4729 generic.go:334] "Generic (PLEG): container finished" podID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerID="433038ebea6b71f9cbacb6c00e2262bb9629117dc95b3f310cf5190eea02e9d0" exitCode=0 Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.551321 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" event={"ID":"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb","Type":"ContainerDied","Data":"433038ebea6b71f9cbacb6c00e2262bb9629117dc95b3f310cf5190eea02e9d0"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.551357 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" event={"ID":"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb","Type":"ContainerStarted","Data":"fcd1f3ca774c109dc8fe0b1335a86e5b5c8834c92479f2772cf70ca1fff81002"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.554857 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7565-account-create-update-n45zb" event={"ID":"c59aa2e6-ef2f-459c-9db7-0765405fe2e7","Type":"ContainerStarted","Data":"4784a7689432186af9cd924a663aa730686434b7f0abd5d7f2d1fdb0124aebcb"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.557533 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lrspj" podStartSLOduration=3.5575049869999997 podStartE2EDuration="3.557504987s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:24.544041245 +0000 UTC m=+1631.128232269" watchObservedRunningTime="2026-01-27 14:32:24.557504987 +0000 UTC m=+1631.141696001" Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.567408 4729 generic.go:334] "Generic (PLEG): container finished" podID="1a98b729-f749-4536-a5eb-671c63734bcd" containerID="a9dd6c52c2f95e7befbd51526aac2fd038f939db117258dbeecde1c3ab9990d2" exitCode=0 Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.570093 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-42bph" event={"ID":"1a98b729-f749-4536-a5eb-671c63734bcd","Type":"ContainerDied","Data":"a9dd6c52c2f95e7befbd51526aac2fd038f939db117258dbeecde1c3ab9990d2"} Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.574861 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-lqmdc" podStartSLOduration=3.574839046 podStartE2EDuration="3.574839046s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:24.561164318 +0000 UTC m=+1631.145355342" watchObservedRunningTime="2026-01-27 14:32:24.574839046 +0000 UTC m=+1631.159030070" Jan 27 14:32:24 crc kubenswrapper[4729]: I0127 14:32:24.613776 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-55c3-account-create-update-b2gdg" podStartSLOduration=3.613754453 podStartE2EDuration="3.613754453s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:24.59451023 +0000 UTC m=+1631.178701264" watchObservedRunningTime="2026-01-27 14:32:24.613754453 +0000 UTC m=+1631.197945467" Jan 27 14:32:24 crc 
kubenswrapper[4729]: I0127 14:32:24.627840 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-7565-account-create-update-n45zb" podStartSLOduration=3.627816561 podStartE2EDuration="3.627816561s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:24.610552184 +0000 UTC m=+1631.194743208" watchObservedRunningTime="2026-01-27 14:32:24.627816561 +0000 UTC m=+1631.212007565" Jan 27 14:32:25 crc kubenswrapper[4729]: I0127 14:32:25.580379 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" event={"ID":"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb","Type":"ContainerStarted","Data":"fcfd4fd5cb2a6f9c9c691c9c41b43a38f61d2ad2553366a98efd53f1423b27b7"} Jan 27 14:32:25 crc kubenswrapper[4729]: I0127 14:32:25.580793 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:25 crc kubenswrapper[4729]: I0127 14:32:25.582990 4729 generic.go:334] "Generic (PLEG): container finished" podID="ec91797f-3812-4509-b9c1-dbc72bb4576c" containerID="113c99986a264b5e8c71e34ddc0c4ca8f433e3e6850fb3288188b1faadc95ad6" exitCode=0 Jan 27 14:32:25 crc kubenswrapper[4729]: I0127 14:32:25.584611 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tsf6d" event={"ID":"ec91797f-3812-4509-b9c1-dbc72bb4576c","Type":"ContainerDied","Data":"113c99986a264b5e8c71e34ddc0c4ca8f433e3e6850fb3288188b1faadc95ad6"} Jan 27 14:32:25 crc kubenswrapper[4729]: I0127 14:32:25.617784 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podStartSLOduration=4.617754892 podStartE2EDuration="4.617754892s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:32:25.613061343 +0000 UTC m=+1632.197252367" watchObservedRunningTime="2026-01-27 14:32:25.617754892 +0000 UTC m=+1632.201945906" Jan 27 14:32:25 crc kubenswrapper[4729]: I0127 14:32:25.980030 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-42bph" Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.054841 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsh9f\" (UniqueName: \"kubernetes.io/projected/1a98b729-f749-4536-a5eb-671c63734bcd-kube-api-access-hsh9f\") pod \"1a98b729-f749-4536-a5eb-671c63734bcd\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.056162 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a98b729-f749-4536-a5eb-671c63734bcd-operator-scripts\") pod \"1a98b729-f749-4536-a5eb-671c63734bcd\" (UID: \"1a98b729-f749-4536-a5eb-671c63734bcd\") " Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.056952 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a98b729-f749-4536-a5eb-671c63734bcd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a98b729-f749-4536-a5eb-671c63734bcd" (UID: "1a98b729-f749-4536-a5eb-671c63734bcd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.057345 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a98b729-f749-4536-a5eb-671c63734bcd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.060683 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a98b729-f749-4536-a5eb-671c63734bcd-kube-api-access-hsh9f" (OuterVolumeSpecName: "kube-api-access-hsh9f") pod "1a98b729-f749-4536-a5eb-671c63734bcd" (UID: "1a98b729-f749-4536-a5eb-671c63734bcd"). InnerVolumeSpecName "kube-api-access-hsh9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.160144 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsh9f\" (UniqueName: \"kubernetes.io/projected/1a98b729-f749-4536-a5eb-671c63734bcd-kube-api-access-hsh9f\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.597492 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-42bph" event={"ID":"1a98b729-f749-4536-a5eb-671c63734bcd","Type":"ContainerDied","Data":"c3c948a56cf7b88aecaa26d35f6948bfbace99f45b0cfa5979176348a6a3b9e3"} Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.597538 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3c948a56cf7b88aecaa26d35f6948bfbace99f45b0cfa5979176348a6a3b9e3" Jan 27 14:32:26 crc kubenswrapper[4729]: I0127 14:32:26.597677 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-42bph" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.636830 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-56t4x"] Jan 27 14:32:27 crc kubenswrapper[4729]: E0127 14:32:27.638457 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a98b729-f749-4536-a5eb-671c63734bcd" containerName="mariadb-database-create" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.638483 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a98b729-f749-4536-a5eb-671c63734bcd" containerName="mariadb-database-create" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.638761 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a98b729-f749-4536-a5eb-671c63734bcd" containerName="mariadb-database-create" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.641599 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.656701 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56t4x"] Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.695350 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxd4\" (UniqueName: \"kubernetes.io/projected/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-kube-api-access-qkxd4\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.695599 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-catalog-content\") pod \"redhat-marketplace-56t4x\" (UID: 
\"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.695791 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-utilities\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.797799 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-catalog-content\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.797963 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-utilities\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.798009 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkxd4\" (UniqueName: \"kubernetes.io/projected/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-kube-api-access-qkxd4\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.798524 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-utilities\") pod \"redhat-marketplace-56t4x\" (UID: 
\"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.798438 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-catalog-content\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.827929 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkxd4\" (UniqueName: \"kubernetes.io/projected/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-kube-api-access-qkxd4\") pod \"redhat-marketplace-56t4x\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:27 crc kubenswrapper[4729]: I0127 14:32:27.961068 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:28 crc kubenswrapper[4729]: I0127 14:32:28.624219 4729 generic.go:334] "Generic (PLEG): container finished" podID="257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" containerID="b5f26b984487f9d2eb089ff7da8d8e7f0ef34ac85b2c821249ae26b6279e89f3" exitCode=0 Jan 27 14:32:28 crc kubenswrapper[4729]: I0127 14:32:28.624323 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1cc2-account-create-update-7k7vv" event={"ID":"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7","Type":"ContainerDied","Data":"b5f26b984487f9d2eb089ff7da8d8e7f0ef34ac85b2c821249ae26b6279e89f3"} Jan 27 14:32:28 crc kubenswrapper[4729]: I0127 14:32:28.627555 4729 generic.go:334] "Generic (PLEG): container finished" podID="db170cee-4747-486f-9e1c-ed91b358127c" containerID="8f0c0bbf4e9b73ceb56c61f98c497ddd247df45472d5740c9f12e1a1843c916b" exitCode=0 Jan 27 14:32:28 crc kubenswrapper[4729]: I0127 14:32:28.627654 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lrspj" event={"ID":"db170cee-4747-486f-9e1c-ed91b358127c","Type":"ContainerDied","Data":"8f0c0bbf4e9b73ceb56c61f98c497ddd247df45472d5740c9f12e1a1843c916b"} Jan 27 14:32:28 crc kubenswrapper[4729]: I0127 14:32:28.629369 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9cb86fa-4640-4a25-a97e-0212029e2d54" containerID="c67e543d704ed34a37a0c05b3dc7747a191cb4be108360a325ce61feaac4c636" exitCode=0 Jan 27 14:32:28 crc kubenswrapper[4729]: I0127 14:32:28.629395 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lqmdc" event={"ID":"a9cb86fa-4640-4a25-a97e-0212029e2d54","Type":"ContainerDied","Data":"c67e543d704ed34a37a0c05b3dc7747a191cb4be108360a325ce61feaac4c636"} Jan 27 14:32:29 crc kubenswrapper[4729]: I0127 14:32:29.642100 4729 generic.go:334] "Generic (PLEG): container finished" podID="f824a03c-8320-4c09-83ab-7bf997460ad5" containerID="11cbfeba36e2b3ae1ac65c78e0c35c7f7f1b447e37625e8d2a7d7e6665775f07" exitCode=0 Jan 27 14:32:29 crc kubenswrapper[4729]: I0127 14:32:29.642212 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a3f-account-create-update-dqmrt" event={"ID":"f824a03c-8320-4c09-83ab-7bf997460ad5","Type":"ContainerDied","Data":"11cbfeba36e2b3ae1ac65c78e0c35c7f7f1b447e37625e8d2a7d7e6665775f07"} Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.482180 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.491463 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.504838 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.509610 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.571962 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-operator-scripts\") pod \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572061 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/a9cb86fa-4640-4a25-a97e-0212029e2d54-kube-api-access-ddkmp\") pod \"a9cb86fa-4640-4a25-a97e-0212029e2d54\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572137 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw4ll\" (UniqueName: \"kubernetes.io/projected/ec91797f-3812-4509-b9c1-dbc72bb4576c-kube-api-access-vw4ll\") pod \"ec91797f-3812-4509-b9c1-dbc72bb4576c\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572168 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dvbl\" (UniqueName: \"kubernetes.io/projected/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-kube-api-access-7dvbl\") pod \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\" (UID: \"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572203 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsv47\" (UniqueName: \"kubernetes.io/projected/db170cee-4747-486f-9e1c-ed91b358127c-kube-api-access-qsv47\") 
pod \"db170cee-4747-486f-9e1c-ed91b358127c\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572250 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db170cee-4747-486f-9e1c-ed91b358127c-operator-scripts\") pod \"db170cee-4747-486f-9e1c-ed91b358127c\" (UID: \"db170cee-4747-486f-9e1c-ed91b358127c\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572378 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cb86fa-4640-4a25-a97e-0212029e2d54-operator-scripts\") pod \"a9cb86fa-4640-4a25-a97e-0212029e2d54\" (UID: \"a9cb86fa-4640-4a25-a97e-0212029e2d54\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.572479 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec91797f-3812-4509-b9c1-dbc72bb4576c-operator-scripts\") pod \"ec91797f-3812-4509-b9c1-dbc72bb4576c\" (UID: \"ec91797f-3812-4509-b9c1-dbc72bb4576c\") " Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.573379 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db170cee-4747-486f-9e1c-ed91b358127c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db170cee-4747-486f-9e1c-ed91b358127c" (UID: "db170cee-4747-486f-9e1c-ed91b358127c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.573430 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cb86fa-4640-4a25-a97e-0212029e2d54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9cb86fa-4640-4a25-a97e-0212029e2d54" (UID: "a9cb86fa-4640-4a25-a97e-0212029e2d54"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.573500 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" (UID: "257aa9e6-20b3-41bf-b4e7-d2f60bd884a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.573507 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec91797f-3812-4509-b9c1-dbc72bb4576c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec91797f-3812-4509-b9c1-dbc72bb4576c" (UID: "ec91797f-3812-4509-b9c1-dbc72bb4576c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.576665 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec91797f-3812-4509-b9c1-dbc72bb4576c-kube-api-access-vw4ll" (OuterVolumeSpecName: "kube-api-access-vw4ll") pod "ec91797f-3812-4509-b9c1-dbc72bb4576c" (UID: "ec91797f-3812-4509-b9c1-dbc72bb4576c"). InnerVolumeSpecName "kube-api-access-vw4ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.577077 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db170cee-4747-486f-9e1c-ed91b358127c-kube-api-access-qsv47" (OuterVolumeSpecName: "kube-api-access-qsv47") pod "db170cee-4747-486f-9e1c-ed91b358127c" (UID: "db170cee-4747-486f-9e1c-ed91b358127c"). InnerVolumeSpecName "kube-api-access-qsv47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.593150 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-kube-api-access-7dvbl" (OuterVolumeSpecName: "kube-api-access-7dvbl") pod "257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" (UID: "257aa9e6-20b3-41bf-b4e7-d2f60bd884a7"). InnerVolumeSpecName "kube-api-access-7dvbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.593723 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9cb86fa-4640-4a25-a97e-0212029e2d54-kube-api-access-ddkmp" (OuterVolumeSpecName: "kube-api-access-ddkmp") pod "a9cb86fa-4640-4a25-a97e-0212029e2d54" (UID: "a9cb86fa-4640-4a25-a97e-0212029e2d54"). InnerVolumeSpecName "kube-api-access-ddkmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.662974 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lqmdc" event={"ID":"a9cb86fa-4640-4a25-a97e-0212029e2d54","Type":"ContainerDied","Data":"5c06610ca6acb8889baebf301a489c30a41f154a528d53983515b55333b1c9e5"} Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.663019 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c06610ca6acb8889baebf301a489c30a41f154a528d53983515b55333b1c9e5" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.663102 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-lqmdc" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.684529 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsv47\" (UniqueName: \"kubernetes.io/projected/db170cee-4747-486f-9e1c-ed91b358127c-kube-api-access-qsv47\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.684977 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db170cee-4747-486f-9e1c-ed91b358127c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.684995 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cb86fa-4640-4a25-a97e-0212029e2d54-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685007 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec91797f-3812-4509-b9c1-dbc72bb4576c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685021 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685033 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddkmp\" (UniqueName: \"kubernetes.io/projected/a9cb86fa-4640-4a25-a97e-0212029e2d54-kube-api-access-ddkmp\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685049 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw4ll\" (UniqueName: \"kubernetes.io/projected/ec91797f-3812-4509-b9c1-dbc72bb4576c-kube-api-access-vw4ll\") on node \"crc\" DevicePath \"\"" 
Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685062 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dvbl\" (UniqueName: \"kubernetes.io/projected/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7-kube-api-access-7dvbl\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685299 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1cc2-account-create-update-7k7vv" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685741 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1cc2-account-create-update-7k7vv" event={"ID":"257aa9e6-20b3-41bf-b4e7-d2f60bd884a7","Type":"ContainerDied","Data":"39a2f680da14ce895d66d13f05123ac0cc49c116bc4dde8a0643ac451ee3b886"} Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.685803 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a2f680da14ce895d66d13f05123ac0cc49c116bc4dde8a0643ac451ee3b886" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.693464 4729 generic.go:334] "Generic (PLEG): container finished" podID="c59aa2e6-ef2f-459c-9db7-0765405fe2e7" containerID="4784a7689432186af9cd924a663aa730686434b7f0abd5d7f2d1fdb0124aebcb" exitCode=0 Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.693546 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7565-account-create-update-n45zb" event={"ID":"c59aa2e6-ef2f-459c-9db7-0765405fe2e7","Type":"ContainerDied","Data":"4784a7689432186af9cd924a663aa730686434b7f0abd5d7f2d1fdb0124aebcb"} Jan 27 14:32:30 crc kubenswrapper[4729]: W0127 14:32:30.695431 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd18b97c1_8f9b_411e_ad64_a8aa04c88e0e.slice/crio-cda95c6b17d3b2bbfec06e8666fca6da69db1c22b45e0109fcd7e987f5614450 WatchSource:0}: Error finding container 
cda95c6b17d3b2bbfec06e8666fca6da69db1c22b45e0109fcd7e987f5614450: Status 404 returned error can't find the container with id cda95c6b17d3b2bbfec06e8666fca6da69db1c22b45e0109fcd7e987f5614450 Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.696306 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-tsf6d" event={"ID":"ec91797f-3812-4509-b9c1-dbc72bb4576c","Type":"ContainerDied","Data":"fedaca9cd4fb2e549cfdc03b879cf803d0b49e697c70a2b1192aa096c9129537"} Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.696334 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fedaca9cd4fb2e549cfdc03b879cf803d0b49e697c70a2b1192aa096c9129537" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.696385 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-tsf6d" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.701806 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lrspj" event={"ID":"db170cee-4747-486f-9e1c-ed91b358127c","Type":"ContainerDied","Data":"cff22f6f59ac20fc2b3f3b7d84c532554a19ca3382e5915398755a5edf7962ef"} Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.702000 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cff22f6f59ac20fc2b3f3b7d84c532554a19ca3382e5915398755a5edf7962ef" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.702153 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lrspj" Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.707395 4729 generic.go:334] "Generic (PLEG): container finished" podID="d44c6dc1-a783-4680-aa97-c68f4c2a435e" containerID="73abc768f98887d6be8f8b3063413ecd1af962851f5d41e6c2b2d92e0e01dcf2" exitCode=0 Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.707487 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c3-account-create-update-b2gdg" event={"ID":"d44c6dc1-a783-4680-aa97-c68f4c2a435e","Type":"ContainerDied","Data":"73abc768f98887d6be8f8b3063413ecd1af962851f5d41e6c2b2d92e0e01dcf2"} Jan 27 14:32:30 crc kubenswrapper[4729]: I0127 14:32:30.734859 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56t4x"] Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.019800 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.104450 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f824a03c-8320-4c09-83ab-7bf997460ad5-operator-scripts\") pod \"f824a03c-8320-4c09-83ab-7bf997460ad5\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") " Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.104516 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f28b\" (UniqueName: \"kubernetes.io/projected/f824a03c-8320-4c09-83ab-7bf997460ad5-kube-api-access-9f28b\") pod \"f824a03c-8320-4c09-83ab-7bf997460ad5\" (UID: \"f824a03c-8320-4c09-83ab-7bf997460ad5\") " Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.105323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f824a03c-8320-4c09-83ab-7bf997460ad5-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "f824a03c-8320-4c09-83ab-7bf997460ad5" (UID: "f824a03c-8320-4c09-83ab-7bf997460ad5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.110430 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f824a03c-8320-4c09-83ab-7bf997460ad5-kube-api-access-9f28b" (OuterVolumeSpecName: "kube-api-access-9f28b") pod "f824a03c-8320-4c09-83ab-7bf997460ad5" (UID: "f824a03c-8320-4c09-83ab-7bf997460ad5"). InnerVolumeSpecName "kube-api-access-9f28b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.207548 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f824a03c-8320-4c09-83ab-7bf997460ad5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.208121 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f28b\" (UniqueName: \"kubernetes.io/projected/f824a03c-8320-4c09-83ab-7bf997460ad5-kube-api-access-9f28b\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.725939 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8a3f-account-create-update-dqmrt" event={"ID":"f824a03c-8320-4c09-83ab-7bf997460ad5","Type":"ContainerDied","Data":"7af9db04fe9bb6ff11b295ff206aacb7944613f0b0b4a753b07631d530c605b8"} Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.725994 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7af9db04fe9bb6ff11b295ff206aacb7944613f0b0b4a753b07631d530c605b8" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.726060 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8a3f-account-create-update-dqmrt" Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.731096 4729 generic.go:334] "Generic (PLEG): container finished" podID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerID="11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c" exitCode=0 Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.731160 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56t4x" event={"ID":"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e","Type":"ContainerDied","Data":"11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c"} Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.731218 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56t4x" event={"ID":"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e","Type":"ContainerStarted","Data":"cda95c6b17d3b2bbfec06e8666fca6da69db1c22b45e0109fcd7e987f5614450"} Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.734271 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kgdx4" event={"ID":"f35b9d6e-1e57-4e7d-812c-8e662e652759","Type":"ContainerStarted","Data":"0f3cbb428e3b53f2e99adaf0648d89ac6ae573b0b428ce0c66abf241ff0c574b"} Jan 27 14:32:31 crc kubenswrapper[4729]: I0127 14:32:31.778310 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kgdx4" podStartSLOduration=3.545485646 podStartE2EDuration="10.778293337s" podCreationTimestamp="2026-01-27 14:32:21 +0000 UTC" firstStartedPulling="2026-01-27 14:32:23.272523909 +0000 UTC m=+1629.856714913" lastFinishedPulling="2026-01-27 14:32:30.5053316 +0000 UTC m=+1637.089522604" observedRunningTime="2026-01-27 14:32:31.776310432 +0000 UTC m=+1638.360501436" watchObservedRunningTime="2026-01-27 14:32:31.778293337 +0000 UTC m=+1638.362484331" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.068468 4729 scope.go:117] 
"RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:32:32 crc kubenswrapper[4729]: E0127 14:32:32.070026 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.239561 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.337933 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j56g4\" (UniqueName: \"kubernetes.io/projected/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-kube-api-access-j56g4\") pod \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.338533 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-operator-scripts\") pod \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\" (UID: \"c59aa2e6-ef2f-459c-9db7-0765405fe2e7\") " Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.339808 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c59aa2e6-ef2f-459c-9db7-0765405fe2e7" (UID: "c59aa2e6-ef2f-459c-9db7-0765405fe2e7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.346782 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-kube-api-access-j56g4" (OuterVolumeSpecName: "kube-api-access-j56g4") pod "c59aa2e6-ef2f-459c-9db7-0765405fe2e7" (UID: "c59aa2e6-ef2f-459c-9db7-0765405fe2e7"). InnerVolumeSpecName "kube-api-access-j56g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.410387 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.440890 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44c6dc1-a783-4680-aa97-c68f4c2a435e-operator-scripts\") pod \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.441246 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdwz\" (UniqueName: \"kubernetes.io/projected/d44c6dc1-a783-4680-aa97-c68f4c2a435e-kube-api-access-vxdwz\") pod \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\" (UID: \"d44c6dc1-a783-4680-aa97-c68f4c2a435e\") " Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.441833 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j56g4\" (UniqueName: \"kubernetes.io/projected/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-kube-api-access-j56g4\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.441858 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c59aa2e6-ef2f-459c-9db7-0765405fe2e7-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.443209 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44c6dc1-a783-4680-aa97-c68f4c2a435e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d44c6dc1-a783-4680-aa97-c68f4c2a435e" (UID: "d44c6dc1-a783-4680-aa97-c68f4c2a435e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.445166 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44c6dc1-a783-4680-aa97-c68f4c2a435e-kube-api-access-vxdwz" (OuterVolumeSpecName: "kube-api-access-vxdwz") pod "d44c6dc1-a783-4680-aa97-c68f4c2a435e" (UID: "d44c6dc1-a783-4680-aa97-c68f4c2a435e"). InnerVolumeSpecName "kube-api-access-vxdwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.492780 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.544553 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdwz\" (UniqueName: \"kubernetes.io/projected/d44c6dc1-a783-4680-aa97-c68f4c2a435e-kube-api-access-vxdwz\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.544589 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d44c6dc1-a783-4680-aa97-c68f4c2a435e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.557630 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4f2nb"] Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.557936 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" 
podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerName="dnsmasq-dns" containerID="cri-o://d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387" gracePeriod=10 Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.745000 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-55c3-account-create-update-b2gdg" event={"ID":"d44c6dc1-a783-4680-aa97-c68f4c2a435e","Type":"ContainerDied","Data":"8e1efe1283e0a1d9878261c3e8caba7bd72c21f711b7679e8745fa23ea164210"} Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.745868 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1efe1283e0a1d9878261c3e8caba7bd72c21f711b7679e8745fa23ea164210" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.746006 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-55c3-account-create-update-b2gdg" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.756213 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-7565-account-create-update-n45zb" event={"ID":"c59aa2e6-ef2f-459c-9db7-0765405fe2e7","Type":"ContainerDied","Data":"ff3ca81f44c7c5ae2d596b56763231872a2c4cc3fb739a6ade6f80fd884636b6"} Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.756529 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff3ca81f44c7c5ae2d596b56763231872a2c4cc3fb739a6ade6f80fd884636b6" Jan 27 14:32:32 crc kubenswrapper[4729]: I0127 14:32:32.756361 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-7565-account-create-update-n45zb" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.612969 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.671596 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-config\") pod \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.671736 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-sb\") pod \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.671818 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-nb\") pod \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.671911 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-dns-svc\") pod \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.671943 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9sm\" (UniqueName: \"kubernetes.io/projected/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-kube-api-access-6b9sm\") pod \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\" (UID: \"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3\") " Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.693160 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-kube-api-access-6b9sm" (OuterVolumeSpecName: "kube-api-access-6b9sm") pod "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" (UID: "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3"). InnerVolumeSpecName "kube-api-access-6b9sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.728894 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" (UID: "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.732898 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-config" (OuterVolumeSpecName: "config") pod "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" (UID: "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.743319 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" (UID: "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.764473 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" (UID: "dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.768532 4729 generic.go:334] "Generic (PLEG): container finished" podID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerID="dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894" exitCode=0 Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.768618 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56t4x" event={"ID":"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e","Type":"ContainerDied","Data":"dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894"} Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.771962 4729 generic.go:334] "Generic (PLEG): container finished" podID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerID="d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387" exitCode=0 Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.772007 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" event={"ID":"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3","Type":"ContainerDied","Data":"d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387"} Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.772036 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" event={"ID":"dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3","Type":"ContainerDied","Data":"4cac18262c773354b2f75ebfc67a66a86883aa2e962e8b5b18a7911113fe5b51"} Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.772054 4729 scope.go:117] "RemoveContainer" containerID="d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.772196 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-4f2nb" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.774043 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.774073 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9sm\" (UniqueName: \"kubernetes.io/projected/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-kube-api-access-6b9sm\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.774087 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.774099 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.774110 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.865112 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4f2nb"] Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.869866 4729 scope.go:117] "RemoveContainer" containerID="5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.879971 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-4f2nb"] Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.896380 4729 scope.go:117] "RemoveContainer" 
containerID="d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387" Jan 27 14:32:33 crc kubenswrapper[4729]: E0127 14:32:33.896759 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387\": container with ID starting with d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387 not found: ID does not exist" containerID="d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.896790 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387"} err="failed to get container status \"d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387\": rpc error: code = NotFound desc = could not find container \"d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387\": container with ID starting with d1b1d1f8c1060e7ebc48d6140706f17c755cc74b75272beff6afa9ce6c76e387 not found: ID does not exist" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.896813 4729 scope.go:117] "RemoveContainer" containerID="5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0" Jan 27 14:32:33 crc kubenswrapper[4729]: E0127 14:32:33.897127 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0\": container with ID starting with 5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0 not found: ID does not exist" containerID="5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0" Jan 27 14:32:33 crc kubenswrapper[4729]: I0127 14:32:33.897145 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0"} err="failed to get container status \"5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0\": rpc error: code = NotFound desc = could not find container \"5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0\": container with ID starting with 5bf71a91568933591f1acbe3645c96c0bdf98bf863ff953312e8b01bd6a43dd0 not found: ID does not exist" Jan 27 14:32:34 crc kubenswrapper[4729]: I0127 14:32:34.063071 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" path="/var/lib/kubelet/pods/dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3/volumes" Jan 27 14:32:34 crc kubenswrapper[4729]: I0127 14:32:34.783542 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56t4x" event={"ID":"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e","Type":"ContainerStarted","Data":"c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3"} Jan 27 14:32:34 crc kubenswrapper[4729]: I0127 14:32:34.804229 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-56t4x" podStartSLOduration=4.9814319529999995 podStartE2EDuration="7.804212781s" podCreationTimestamp="2026-01-27 14:32:27 +0000 UTC" firstStartedPulling="2026-01-27 14:32:31.733838957 +0000 UTC m=+1638.318029961" lastFinishedPulling="2026-01-27 14:32:34.556619785 +0000 UTC m=+1641.140810789" observedRunningTime="2026-01-27 14:32:34.7994816 +0000 UTC m=+1641.383672604" watchObservedRunningTime="2026-01-27 14:32:34.804212781 +0000 UTC m=+1641.388403785" Jan 27 14:32:37 crc kubenswrapper[4729]: I0127 14:32:37.961493 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:37 crc kubenswrapper[4729]: I0127 14:32:37.962003 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:38 crc kubenswrapper[4729]: I0127 14:32:38.019961 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:45 crc kubenswrapper[4729]: I0127 14:32:45.051519 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:32:45 crc kubenswrapper[4729]: E0127 14:32:45.052330 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:32:48 crc kubenswrapper[4729]: I0127 14:32:48.014385 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:48 crc kubenswrapper[4729]: I0127 14:32:48.079918 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56t4x"] Jan 27 14:32:48 crc kubenswrapper[4729]: I0127 14:32:48.939527 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-56t4x" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="registry-server" containerID="cri-o://c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3" gracePeriod=2 Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.804359 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.933351 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-utilities\") pod \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.933525 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-catalog-content\") pod \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.933655 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkxd4\" (UniqueName: \"kubernetes.io/projected/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-kube-api-access-qkxd4\") pod \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\" (UID: \"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e\") " Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.934613 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-utilities" (OuterVolumeSpecName: "utilities") pod "d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" (UID: "d18b97c1-8f9b-411e-ad64-a8aa04c88e0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.942327 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-kube-api-access-qkxd4" (OuterVolumeSpecName: "kube-api-access-qkxd4") pod "d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" (UID: "d18b97c1-8f9b-411e-ad64-a8aa04c88e0e"). InnerVolumeSpecName "kube-api-access-qkxd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.955983 4729 generic.go:334] "Generic (PLEG): container finished" podID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerID="c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3" exitCode=0 Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.956067 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56t4x" event={"ID":"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e","Type":"ContainerDied","Data":"c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3"} Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.956114 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56t4x" event={"ID":"d18b97c1-8f9b-411e-ad64-a8aa04c88e0e","Type":"ContainerDied","Data":"cda95c6b17d3b2bbfec06e8666fca6da69db1c22b45e0109fcd7e987f5614450"} Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.956140 4729 scope.go:117] "RemoveContainer" containerID="c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3" Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.956070 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56t4x" Jan 27 14:32:49 crc kubenswrapper[4729]: I0127 14:32:49.964998 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" (UID: "d18b97c1-8f9b-411e-ad64-a8aa04c88e0e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.015778 4729 scope.go:117] "RemoveContainer" containerID="dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.036566 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.036596 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkxd4\" (UniqueName: \"kubernetes.io/projected/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-kube-api-access-qkxd4\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.036606 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.049173 4729 scope.go:117] "RemoveContainer" containerID="11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.106029 4729 scope.go:117] "RemoveContainer" containerID="c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3" Jan 27 14:32:50 crc kubenswrapper[4729]: E0127 14:32:50.106413 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3\": container with ID starting with c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3 not found: ID does not exist" containerID="c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.106456 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3"} err="failed to get container status \"c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3\": rpc error: code = NotFound desc = could not find container \"c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3\": container with ID starting with c9e053c5da567480438ad49a2cf7aa561081dd4e66f5b09596ed23159c70cea3 not found: ID does not exist" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.106479 4729 scope.go:117] "RemoveContainer" containerID="dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894" Jan 27 14:32:50 crc kubenswrapper[4729]: E0127 14:32:50.106788 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894\": container with ID starting with dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894 not found: ID does not exist" containerID="dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.106829 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894"} err="failed to get container status \"dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894\": rpc error: code = NotFound desc = could not find container \"dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894\": container with ID starting with dff010b635786053e3e488406caf57d787fcb5e756a6eb39cefef1e5205fa894 not found: ID does not exist" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.106899 4729 scope.go:117] "RemoveContainer" containerID="11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c" Jan 27 14:32:50 crc kubenswrapper[4729]: E0127 14:32:50.108105 4729 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c\": container with ID starting with 11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c not found: ID does not exist" containerID="11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.108146 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c"} err="failed to get container status \"11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c\": rpc error: code = NotFound desc = could not find container \"11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c\": container with ID starting with 11a6e61597d33dcddad1cbb7f0cf740ffcb0623e314872742e185ea4191c754c not found: ID does not exist" Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.288035 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-56t4x"] Jan 27 14:32:50 crc kubenswrapper[4729]: I0127 14:32:50.298030 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-56t4x"] Jan 27 14:32:52 crc kubenswrapper[4729]: I0127 14:32:52.064791 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" path="/var/lib/kubelet/pods/d18b97c1-8f9b-411e-ad64-a8aa04c88e0e/volumes" Jan 27 14:32:57 crc kubenswrapper[4729]: I0127 14:32:57.052006 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:32:57 crc kubenswrapper[4729]: E0127 14:32:57.052966 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:33:03 crc kubenswrapper[4729]: I0127 14:33:03.088286 4729 generic.go:334] "Generic (PLEG): container finished" podID="f35b9d6e-1e57-4e7d-812c-8e662e652759" containerID="0f3cbb428e3b53f2e99adaf0648d89ac6ae573b0b428ce0c66abf241ff0c574b" exitCode=0 Jan 27 14:33:03 crc kubenswrapper[4729]: I0127 14:33:03.088370 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kgdx4" event={"ID":"f35b9d6e-1e57-4e7d-812c-8e662e652759","Type":"ContainerDied","Data":"0f3cbb428e3b53f2e99adaf0648d89ac6ae573b0b428ce0c66abf241ff0c574b"} Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.495656 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.569022 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-config-data\") pod \"f35b9d6e-1e57-4e7d-812c-8e662e652759\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.569637 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-combined-ca-bundle\") pod \"f35b9d6e-1e57-4e7d-812c-8e662e652759\" (UID: \"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.569697 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-896h7\" (UniqueName: \"kubernetes.io/projected/f35b9d6e-1e57-4e7d-812c-8e662e652759-kube-api-access-896h7\") pod \"f35b9d6e-1e57-4e7d-812c-8e662e652759\" (UID: 
\"f35b9d6e-1e57-4e7d-812c-8e662e652759\") " Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.575168 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35b9d6e-1e57-4e7d-812c-8e662e652759-kube-api-access-896h7" (OuterVolumeSpecName: "kube-api-access-896h7") pod "f35b9d6e-1e57-4e7d-812c-8e662e652759" (UID: "f35b9d6e-1e57-4e7d-812c-8e662e652759"). InnerVolumeSpecName "kube-api-access-896h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.604452 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35b9d6e-1e57-4e7d-812c-8e662e652759" (UID: "f35b9d6e-1e57-4e7d-812c-8e662e652759"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.629680 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-config-data" (OuterVolumeSpecName: "config-data") pod "f35b9d6e-1e57-4e7d-812c-8e662e652759" (UID: "f35b9d6e-1e57-4e7d-812c-8e662e652759"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.672467 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.672507 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35b9d6e-1e57-4e7d-812c-8e662e652759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:04 crc kubenswrapper[4729]: I0127 14:33:04.672520 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-896h7\" (UniqueName: \"kubernetes.io/projected/f35b9d6e-1e57-4e7d-812c-8e662e652759-kube-api-access-896h7\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.108013 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kgdx4" event={"ID":"f35b9d6e-1e57-4e7d-812c-8e662e652759","Type":"ContainerDied","Data":"70c34a4ef564cd02bd39f6a63eca8368cebbe72b6e8e3d6300354359e40368d7"} Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.108072 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c34a4ef564cd02bd39f6a63eca8368cebbe72b6e8e3d6300354359e40368d7" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.108099 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-kgdx4" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.416629 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-2b7tf"] Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417092 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="extract-content" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417111 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="extract-content" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417121 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44c6dc1-a783-4680-aa97-c68f4c2a435e" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417127 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44c6dc1-a783-4680-aa97-c68f4c2a435e" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417140 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417146 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417160 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="registry-server" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417166 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="registry-server" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417175 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f35b9d6e-1e57-4e7d-812c-8e662e652759" containerName="keystone-db-sync" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417180 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35b9d6e-1e57-4e7d-812c-8e662e652759" containerName="keystone-db-sync" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417191 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec91797f-3812-4509-b9c1-dbc72bb4576c" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417197 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec91797f-3812-4509-b9c1-dbc72bb4576c" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417216 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f824a03c-8320-4c09-83ab-7bf997460ad5" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417222 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f824a03c-8320-4c09-83ab-7bf997460ad5" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417230 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="extract-utilities" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417236 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="extract-utilities" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417252 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerName="init" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417258 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerName="init" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417266 4729 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a9cb86fa-4640-4a25-a97e-0212029e2d54" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417271 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cb86fa-4640-4a25-a97e-0212029e2d54" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417283 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db170cee-4747-486f-9e1c-ed91b358127c" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417289 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="db170cee-4747-486f-9e1c-ed91b358127c" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417298 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59aa2e6-ef2f-459c-9db7-0765405fe2e7" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417304 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59aa2e6-ef2f-459c-9db7-0765405fe2e7" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: E0127 14:33:05.417316 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerName="dnsmasq-dns" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417321 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerName="dnsmasq-dns" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417492 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59aa2e6-ef2f-459c-9db7-0765405fe2e7" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417501 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f824a03c-8320-4c09-83ab-7bf997460ad5" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc 
kubenswrapper[4729]: I0127 14:33:05.417515 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35b9d6e-1e57-4e7d-812c-8e662e652759" containerName="keystone-db-sync" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417522 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18b97c1-8f9b-411e-ad64-a8aa04c88e0e" containerName="registry-server" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417538 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="db170cee-4747-486f-9e1c-ed91b358127c" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417549 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417557 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9cb86fa-4640-4a25-a97e-0212029e2d54" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417568 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec91797f-3812-4509-b9c1-dbc72bb4576c" containerName="mariadb-database-create" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417576 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44c6dc1-a783-4680-aa97-c68f4c2a435e" containerName="mariadb-account-create-update" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.417587 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc32fec2-cfc7-41b5-a6a7-6ebf12285ca3" containerName="dnsmasq-dns" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.422590 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.432173 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-2b7tf"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.473728 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8ww22"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.475447 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.486338 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.486563 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.486721 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.486997 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sqzc6" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.487289 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.490322 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.490403 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.490468 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-config\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.490611 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.492239 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-svc\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.492371 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svhp6\" (UniqueName: \"kubernetes.io/projected/f6b05e13-e766-46f5-90f1-7ad6cab977fb-kube-api-access-svhp6\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.512822 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8ww22"] Jan 27 
14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.534746 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-6cnrq"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.536396 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.539582 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.539767 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-blqsf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.561956 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6cnrq"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594384 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594670 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-config-data\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594705 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpqbz\" (UniqueName: \"kubernetes.io/projected/9fec2ad5-7f25-431f-9b19-b0206df969ef-kube-api-access-cpqbz\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " 
pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594740 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-svc\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594785 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svhp6\" (UniqueName: \"kubernetes.io/projected/f6b05e13-e766-46f5-90f1-7ad6cab977fb-kube-api-access-svhp6\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594850 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-credential-keys\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594962 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-fernet-keys\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.594995 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 
14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.595035 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-scripts\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.595075 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.595114 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-config\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.595139 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-combined-ca-bundle\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.596467 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.597425 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-svc\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.597711 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.598214 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.598227 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-config\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.624765 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhp6\" (UniqueName: \"kubernetes.io/projected/f6b05e13-e766-46f5-90f1-7ad6cab977fb-kube-api-access-svhp6\") pod \"dnsmasq-dns-5b868669f-2b7tf\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.689841 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-f9tpr"] Jan 27 14:33:05 crc 
kubenswrapper[4729]: I0127 14:33:05.697133 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-combined-ca-bundle\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697207 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-config-data\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697233 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpqbz\" (UniqueName: \"kubernetes.io/projected/9fec2ad5-7f25-431f-9b19-b0206df969ef-kube-api-access-cpqbz\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697306 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-config-data\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697330 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-credential-keys\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697416 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-fernet-keys\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697470 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-scripts\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697516 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-combined-ca-bundle\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.697547 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9jl\" (UniqueName: \"kubernetes.io/projected/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-kube-api-access-8s9jl\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.712319 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f9tpr"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.712432 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.713789 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-combined-ca-bundle\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.716315 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.717702 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-scripts\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.718196 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-credential-keys\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.719591 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.719955 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p57lw" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.720691 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-config-data\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " 
pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.725565 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-fernet-keys\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.730467 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpqbz\" (UniqueName: \"kubernetes.io/projected/9fec2ad5-7f25-431f-9b19-b0206df969ef-kube-api-access-cpqbz\") pod \"keystone-bootstrap-8ww22\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.747228 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.799825 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-combined-ca-bundle\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.799886 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-config-data\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.799952 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-combined-ca-bundle\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.799999 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-scripts\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.800030 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7e4c134-9472-463f-b7be-226acbf7954b-etc-machine-id\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.800047 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-config-data\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.800073 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jhn\" (UniqueName: \"kubernetes.io/projected/f7e4c134-9472-463f-b7be-226acbf7954b-kube-api-access-67jhn\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.800143 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9jl\" (UniqueName: \"kubernetes.io/projected/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-kube-api-access-8s9jl\") 
pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.800172 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-db-sync-config-data\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.806497 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.807574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-config-data\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.813514 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-combined-ca-bundle\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.828946 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-2b7tf"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.858511 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9jl\" (UniqueName: \"kubernetes.io/projected/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-kube-api-access-8s9jl\") pod \"heat-db-sync-6cnrq\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 
14:33:05.859059 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6cnrq" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.876078 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dh77n"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.877606 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.883929 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.884158 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.889316 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gfn2v" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.902304 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-combined-ca-bundle\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.902569 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-config-data\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.902723 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-scripts\") pod \"cinder-db-sync-f9tpr\" (UID: 
\"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.902821 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7e4c134-9472-463f-b7be-226acbf7954b-etc-machine-id\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.902967 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67jhn\" (UniqueName: \"kubernetes.io/projected/f7e4c134-9472-463f-b7be-226acbf7954b-kube-api-access-67jhn\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.903410 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-db-sync-config-data\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.908987 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7e4c134-9472-463f-b7be-226acbf7954b-etc-machine-id\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.924737 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-scripts\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.927434 
4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-db-sync-config-data\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.948024 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-config-data\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.948581 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hw6n4"] Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.950370 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.956578 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.957341 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-combined-ca-bundle\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.957724 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.957957 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-75vwn" Jan 27 14:33:05 crc kubenswrapper[4729]: I0127 14:33:05.971554 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-67jhn\" (UniqueName: \"kubernetes.io/projected/f7e4c134-9472-463f-b7be-226acbf7954b-kube-api-access-67jhn\") pod \"cinder-db-sync-f9tpr\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.042983 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-combined-ca-bundle\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043152 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx47f\" (UniqueName: \"kubernetes.io/projected/bd20345f-846e-4b32-ae20-dcfd968b207d-kube-api-access-cx47f\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043391 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-scripts\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043608 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4f9z\" (UniqueName: \"kubernetes.io/projected/36bba371-e800-414e-8523-51e905e6d074-kube-api-access-d4f9z\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043741 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-combined-ca-bundle\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043771 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-config\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043925 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd20345f-846e-4b32-ae20-dcfd968b207d-logs\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.044341 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hw6n4"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.043986 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-config-data\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.072547 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226010 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd20345f-846e-4b32-ae20-dcfd968b207d-logs\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226261 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-config-data\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226408 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-combined-ca-bundle\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226555 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx47f\" (UniqueName: \"kubernetes.io/projected/bd20345f-846e-4b32-ae20-dcfd968b207d-kube-api-access-cx47f\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226591 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-scripts\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226782 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4f9z\" (UniqueName: \"kubernetes.io/projected/36bba371-e800-414e-8523-51e905e6d074-kube-api-access-d4f9z\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226936 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-combined-ca-bundle\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.226965 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-config\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.230741 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-klp4v"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.237853 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd20345f-846e-4b32-ae20-dcfd968b207d-logs\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.239933 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.263662 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-config\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.302791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-combined-ca-bundle\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.305571 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-scripts\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.314536 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-combined-ca-bundle\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.320478 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-config-data\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.322191 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cx47f\" (UniqueName: \"kubernetes.io/projected/bd20345f-846e-4b32-ae20-dcfd968b207d-kube-api-access-cx47f\") pod \"placement-db-sync-hw6n4\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.333474 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dh77n"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.337227 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4f9z\" (UniqueName: \"kubernetes.io/projected/36bba371-e800-414e-8523-51e905e6d074-kube-api-access-d4f9z\") pod \"neutron-db-sync-dh77n\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.375991 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-klp4v"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.405396 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dh77n" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.443469 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfghc\" (UniqueName: \"kubernetes.io/projected/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-kube-api-access-gfghc\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.443831 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.443960 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-config\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.444430 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.444507 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: 
\"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.445135 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.462296 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6t9vl"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.463855 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.465248 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.470815 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-46vg5" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.471043 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.507152 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6t9vl"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.542979 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553375 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-combined-ca-bundle\") pod \"barbican-db-sync-6t9vl\" (UID: 
\"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553467 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553527 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553607 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfghc\" (UniqueName: \"kubernetes.io/projected/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-kube-api-access-gfghc\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553628 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553692 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-config\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " 
pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553781 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwj27\" (UniqueName: \"kubernetes.io/projected/99c157a2-c017-4ace-bff5-50dfef32c990-kube-api-access-rwj27\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553889 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.553943 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-db-sync-config-data\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.555665 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.555735 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" 
Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.556033 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-config\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.556178 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.556520 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.557284 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.569845 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.570175 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.584485 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.596823 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfghc\" (UniqueName: 
\"kubernetes.io/projected/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-kube-api-access-gfghc\") pod \"dnsmasq-dns-cf78879c9-klp4v\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.636603 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658631 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-log-httpd\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658712 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658738 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-run-httpd\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658791 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwj27\" (UniqueName: \"kubernetes.io/projected/99c157a2-c017-4ace-bff5-50dfef32c990-kube-api-access-rwj27\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658857 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658971 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-db-sync-config-data\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.658997 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-combined-ca-bundle\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.659105 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-scripts\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.659161 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-config-data\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.659209 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzmr\" (UniqueName: 
\"kubernetes.io/projected/a9b28343-b8b8-4b61-9c61-0003f8ca6556-kube-api-access-9bzmr\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.666386 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-db-sync-config-data\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.666547 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-combined-ca-bundle\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.706856 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwj27\" (UniqueName: \"kubernetes.io/projected/99c157a2-c017-4ace-bff5-50dfef32c990-kube-api-access-rwj27\") pod \"barbican-db-sync-6t9vl\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.744135 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-2b7tf"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.764737 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.764952 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-scripts\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.765051 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-config-data\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.765101 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bzmr\" (UniqueName: \"kubernetes.io/projected/a9b28343-b8b8-4b61-9c61-0003f8ca6556-kube-api-access-9bzmr\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.765222 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-log-httpd\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.765281 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.766153 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-run-httpd\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc 
kubenswrapper[4729]: I0127 14:33:06.766672 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-run-httpd\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.767744 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-log-httpd\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.771351 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-config-data\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.771437 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.773278 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.779493 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-scripts\") pod \"ceilometer-0\" (UID: 
\"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.791790 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bzmr\" (UniqueName: \"kubernetes.io/projected/a9b28343-b8b8-4b61-9c61-0003f8ca6556-kube-api-access-9bzmr\") pod \"ceilometer-0\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " pod="openstack/ceilometer-0" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.815014 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.833874 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8ww22"] Jan 27 14:33:06 crc kubenswrapper[4729]: I0127 14:33:06.903562 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.046161 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6cnrq"] Jan 27 14:33:07 crc kubenswrapper[4729]: W0127 14:33:07.077857 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ddc1dc_d7c8_4a7a_9467_ee0048d258c9.slice/crio-dd7cfb1566e64eb962db45133875ad1c7243a0e5eab0c75ef87f973f5383f481 WatchSource:0}: Error finding container dd7cfb1566e64eb962db45133875ad1c7243a0e5eab0c75ef87f973f5383f481: Status 404 returned error can't find the container with id dd7cfb1566e64eb962db45133875ad1c7243a0e5eab0c75ef87f973f5383f481 Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.236794 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6cnrq" event={"ID":"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9","Type":"ContainerStarted","Data":"dd7cfb1566e64eb962db45133875ad1c7243a0e5eab0c75ef87f973f5383f481"} Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 
14:33:07.243511 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" podUID="f6b05e13-e766-46f5-90f1-7ad6cab977fb" containerName="init" containerID="cri-o://100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b" gracePeriod=10 Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.243620 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" event={"ID":"f6b05e13-e766-46f5-90f1-7ad6cab977fb","Type":"ContainerStarted","Data":"100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b"} Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.243647 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" event={"ID":"f6b05e13-e766-46f5-90f1-7ad6cab977fb","Type":"ContainerStarted","Data":"e351164636105c3af94954ebb6b566438fa9bdd6169f4bc1227a2829a9bf6aa5"} Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.250331 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8ww22" event={"ID":"9fec2ad5-7f25-431f-9b19-b0206df969ef","Type":"ContainerStarted","Data":"5141da70235ba124af3a8095cde146cbdcb5dc5ddd69b8f842deba750dcf49d0"} Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.283611 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dh77n"] Jan 27 14:33:07 crc kubenswrapper[4729]: W0127 14:33:07.311366 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36bba371_e800_414e_8523_51e905e6d074.slice/crio-9187f33e53c7c80498b4447e9458f7ba02b76721f6f0508a10304dd4b535443f WatchSource:0}: Error finding container 9187f33e53c7c80498b4447e9458f7ba02b76721f6f0508a10304dd4b535443f: Status 404 returned error can't find the container with id 9187f33e53c7c80498b4447e9458f7ba02b76721f6f0508a10304dd4b535443f Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 
14:33:07.313398 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f9tpr"] Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.753088 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-klp4v"] Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.806677 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hw6n4"] Jan 27 14:33:07 crc kubenswrapper[4729]: W0127 14:33:07.941147 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c157a2_c017_4ace_bff5_50dfef32c990.slice/crio-e96470e2ef7ae7e72860f2726cd0fce7bda5bb09147bd0e64e25b0c159075d52 WatchSource:0}: Error finding container e96470e2ef7ae7e72860f2726cd0fce7bda5bb09147bd0e64e25b0c159075d52: Status 404 returned error can't find the container with id e96470e2ef7ae7e72860f2726cd0fce7bda5bb09147bd0e64e25b0c159075d52 Jan 27 14:33:07 crc kubenswrapper[4729]: I0127 14:33:07.959394 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.007216 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6t9vl"] Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.069091 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:33:08 crc kubenswrapper[4729]: E0127 14:33:08.069791 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 
14:33:08.075422 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.120815 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-config\") pod \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.120875 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-swift-storage-0\") pod \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.120986 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-sb\") pod \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.121059 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-svc\") pod \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.121083 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svhp6\" (UniqueName: \"kubernetes.io/projected/f6b05e13-e766-46f5-90f1-7ad6cab977fb-kube-api-access-svhp6\") pod \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.121387 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-nb\") pod \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\" (UID: \"f6b05e13-e766-46f5-90f1-7ad6cab977fb\") " Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.167918 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f6b05e13-e766-46f5-90f1-7ad6cab977fb" (UID: "f6b05e13-e766-46f5-90f1-7ad6cab977fb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.191333 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b05e13-e766-46f5-90f1-7ad6cab977fb-kube-api-access-svhp6" (OuterVolumeSpecName: "kube-api-access-svhp6") pod "f6b05e13-e766-46f5-90f1-7ad6cab977fb" (UID: "f6b05e13-e766-46f5-90f1-7ad6cab977fb"). InnerVolumeSpecName "kube-api-access-svhp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.230554 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svhp6\" (UniqueName: \"kubernetes.io/projected/f6b05e13-e766-46f5-90f1-7ad6cab977fb-kube-api-access-svhp6\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.230627 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.284650 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.298319 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dh77n" event={"ID":"36bba371-e800-414e-8523-51e905e6d074","Type":"ContainerStarted","Data":"8cbf1ffe2c846c7fddb8115a76cdfe2c3b5726529c1f610e94be15ff9e57dc17"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.298398 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dh77n" event={"ID":"36bba371-e800-414e-8523-51e905e6d074","Type":"ContainerStarted","Data":"9187f33e53c7c80498b4447e9458f7ba02b76721f6f0508a10304dd4b535443f"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.313189 4729 generic.go:334] "Generic (PLEG): container finished" podID="f6b05e13-e766-46f5-90f1-7ad6cab977fb" containerID="100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b" exitCode=0 Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.313328 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.314118 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" event={"ID":"f6b05e13-e766-46f5-90f1-7ad6cab977fb","Type":"ContainerDied","Data":"100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.314199 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-2b7tf" event={"ID":"f6b05e13-e766-46f5-90f1-7ad6cab977fb","Type":"ContainerDied","Data":"e351164636105c3af94954ebb6b566438fa9bdd6169f4bc1227a2829a9bf6aa5"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.314225 4729 scope.go:117] "RemoveContainer" containerID="100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.331268 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dh77n" podStartSLOduration=3.331242237 podStartE2EDuration="3.331242237s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:08.326960819 +0000 UTC m=+1674.911151833" watchObservedRunningTime="2026-01-27 14:33:08.331242237 +0000 UTC m=+1674.915433251" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.339047 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8ww22" event={"ID":"9fec2ad5-7f25-431f-9b19-b0206df969ef","Type":"ContainerStarted","Data":"5ac6f89f24eeeede04efd098de369bdc1a2d23a92357beb7e0e20e446cd5072e"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.378763 4729 scope.go:117] "RemoveContainer" containerID="100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b" Jan 27 14:33:08 crc kubenswrapper[4729]: E0127 14:33:08.379171 4729 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b\": container with ID starting with 100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b not found: ID does not exist" containerID="100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.379211 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b"} err="failed to get container status \"100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b\": rpc error: code = NotFound desc = could not find container \"100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b\": container with ID starting with 100e31c07f08daf9385c53b3969e1333f1785cca8495afc407ac2a11db29ba1b not found: ID does not exist" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.382295 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hw6n4" event={"ID":"bd20345f-846e-4b32-ae20-dcfd968b207d","Type":"ContainerStarted","Data":"689cc1b2a64d699fa4445cf7cf860a76b8a5087a761eb440855537bf57d31dd4"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.384687 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f9tpr" event={"ID":"f7e4c134-9472-463f-b7be-226acbf7954b","Type":"ContainerStarted","Data":"403a5e4a35a0eaa941d3e667c37dbd789384968613294aec41ed3c07c6e988e4"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.393350 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6t9vl" event={"ID":"99c157a2-c017-4ace-bff5-50dfef32c990","Type":"ContainerStarted","Data":"e96470e2ef7ae7e72860f2726cd0fce7bda5bb09147bd0e64e25b0c159075d52"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.393328 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8ww22" podStartSLOduration=3.393300134 podStartE2EDuration="3.393300134s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:08.36135929 +0000 UTC m=+1674.945550324" watchObservedRunningTime="2026-01-27 14:33:08.393300134 +0000 UTC m=+1674.977491148" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.403962 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerStarted","Data":"1e8c3490ed7fc07264fb17eea7a0185a5b81c8b2606b85c05807cc8973f5074b"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.416085 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" event={"ID":"bbe84ff4-e9ce-40bb-adea-426979dbd7c3","Type":"ContainerStarted","Data":"dc54c08b6e335ee6a70197f852e060bcf8923142082a5522c649748207f782c7"} Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.438602 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-config" (OuterVolumeSpecName: "config") pod "f6b05e13-e766-46f5-90f1-7ad6cab977fb" (UID: "f6b05e13-e766-46f5-90f1-7ad6cab977fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.445410 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f6b05e13-e766-46f5-90f1-7ad6cab977fb" (UID: "f6b05e13-e766-46f5-90f1-7ad6cab977fb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.460787 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f6b05e13-e766-46f5-90f1-7ad6cab977fb" (UID: "f6b05e13-e766-46f5-90f1-7ad6cab977fb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.465307 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f6b05e13-e766-46f5-90f1-7ad6cab977fb" (UID: "f6b05e13-e766-46f5-90f1-7ad6cab977fb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.537642 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.538053 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.538062 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.538071 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f6b05e13-e766-46f5-90f1-7ad6cab977fb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 
14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.781294 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-2b7tf"] Jan 27 14:33:08 crc kubenswrapper[4729]: I0127 14:33:08.802662 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-2b7tf"] Jan 27 14:33:09 crc kubenswrapper[4729]: I0127 14:33:09.443650 4729 generic.go:334] "Generic (PLEG): container finished" podID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerID="cf98a09bcbaac99fe85f5b248a998c2f55b689641e8037cbfc56a70bcb6f3f49" exitCode=0 Jan 27 14:33:09 crc kubenswrapper[4729]: I0127 14:33:09.443752 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" event={"ID":"bbe84ff4-e9ce-40bb-adea-426979dbd7c3","Type":"ContainerDied","Data":"cf98a09bcbaac99fe85f5b248a998c2f55b689641e8037cbfc56a70bcb6f3f49"} Jan 27 14:33:10 crc kubenswrapper[4729]: I0127 14:33:10.073874 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b05e13-e766-46f5-90f1-7ad6cab977fb" path="/var/lib/kubelet/pods/f6b05e13-e766-46f5-90f1-7ad6cab977fb/volumes" Jan 27 14:33:10 crc kubenswrapper[4729]: I0127 14:33:10.459904 4729 generic.go:334] "Generic (PLEG): container finished" podID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" containerID="b2b782c35a7c78c09c5bd3b2ae2b768ed5bf1923baed5a33e7aee56cb34c2893" exitCode=0 Jan 27 14:33:10 crc kubenswrapper[4729]: I0127 14:33:10.460048 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mvsqd" event={"ID":"1f2d8fdf-9710-4e95-a733-8ce7f61951eb","Type":"ContainerDied","Data":"b2b782c35a7c78c09c5bd3b2ae2b768ed5bf1923baed5a33e7aee56cb34c2893"} Jan 27 14:33:10 crc kubenswrapper[4729]: I0127 14:33:10.467492 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" 
event={"ID":"bbe84ff4-e9ce-40bb-adea-426979dbd7c3","Type":"ContainerStarted","Data":"6b93d5d14331b2ccf9c7785d5ea009d81a7437b3281e693e628c3543fe992d79"} Jan 27 14:33:10 crc kubenswrapper[4729]: I0127 14:33:10.468073 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:10 crc kubenswrapper[4729]: I0127 14:33:10.512109 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" podStartSLOduration=5.512088096 podStartE2EDuration="5.512088096s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:10.500039643 +0000 UTC m=+1677.084230647" watchObservedRunningTime="2026-01-27 14:33:10.512088096 +0000 UTC m=+1677.096279100" Jan 27 14:33:16 crc kubenswrapper[4729]: I0127 14:33:16.637085 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:16 crc kubenswrapper[4729]: I0127 14:33:16.734204 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-lgdfh"] Jan 27 14:33:16 crc kubenswrapper[4729]: I0127 14:33:16.735669 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" containerID="cri-o://fcfd4fd5cb2a6f9c9c691c9c41b43a38f61d2ad2553366a98efd53f1423b27b7" gracePeriod=10 Jan 27 14:33:17 crc kubenswrapper[4729]: I0127 14:33:17.492483 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Jan 27 14:33:18 crc kubenswrapper[4729]: I0127 14:33:18.573978 4729 
generic.go:334] "Generic (PLEG): container finished" podID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerID="fcfd4fd5cb2a6f9c9c691c9c41b43a38f61d2ad2553366a98efd53f1423b27b7" exitCode=0 Jan 27 14:33:18 crc kubenswrapper[4729]: I0127 14:33:18.574059 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" event={"ID":"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb","Type":"ContainerDied","Data":"fcfd4fd5cb2a6f9c9c691c9c41b43a38f61d2ad2553366a98efd53f1423b27b7"} Jan 27 14:33:19 crc kubenswrapper[4729]: I0127 14:33:19.051346 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:33:19 crc kubenswrapper[4729]: E0127 14:33:19.051707 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:33:19 crc kubenswrapper[4729]: E0127 14:33:19.905157 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 27 14:33:19 crc kubenswrapper[4729]: E0127 14:33:19.905831 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cx47f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-hw6n4_openstack(bd20345f-846e-4b32-ae20-dcfd968b207d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:33:19 crc kubenswrapper[4729]: E0127 14:33:19.907354 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-hw6n4" podUID="bd20345f-846e-4b32-ae20-dcfd968b207d" Jan 27 14:33:20 crc kubenswrapper[4729]: E0127 14:33:20.597689 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-hw6n4" podUID="bd20345f-846e-4b32-ae20-dcfd968b207d" Jan 27 14:33:22 crc kubenswrapper[4729]: I0127 14:33:22.492239 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Jan 27 14:33:22 crc kubenswrapper[4729]: I0127 14:33:22.616333 4729 generic.go:334] "Generic (PLEG): container finished" podID="9fec2ad5-7f25-431f-9b19-b0206df969ef" containerID="5ac6f89f24eeeede04efd098de369bdc1a2d23a92357beb7e0e20e446cd5072e" exitCode=0 Jan 27 14:33:22 crc kubenswrapper[4729]: I0127 14:33:22.616436 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8ww22" event={"ID":"9fec2ad5-7f25-431f-9b19-b0206df969ef","Type":"ContainerDied","Data":"5ac6f89f24eeeede04efd098de369bdc1a2d23a92357beb7e0e20e446cd5072e"} Jan 27 14:33:24 crc kubenswrapper[4729]: E0127 14:33:24.736509 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 27 14:33:24 crc kubenswrapper[4729]: E0127 14:33:24.737281 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s9jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-6cnrq_openstack(d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:33:24 crc kubenswrapper[4729]: E0127 14:33:24.738523 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-6cnrq" podUID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" Jan 27 14:33:25 crc kubenswrapper[4729]: E0127 14:33:25.646373 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-6cnrq" podUID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" Jan 27 14:33:27 crc kubenswrapper[4729]: I0127 14:33:27.492232 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Jan 27 14:33:27 crc kubenswrapper[4729]: I0127 14:33:27.492680 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:33:31 crc kubenswrapper[4729]: I0127 14:33:31.051304 4729 scope.go:117] "RemoveContainer" 
containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:33:31 crc kubenswrapper[4729]: E0127 14:33:31.052925 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:33:32 crc kubenswrapper[4729]: I0127 14:33:32.492268 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: connect: connection refused" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.779177 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mvsqd" event={"ID":"1f2d8fdf-9710-4e95-a733-8ce7f61951eb","Type":"ContainerDied","Data":"d490cbe338b0e26671d69aba007aa7c557f8b602860faeaf1274a5e6daff13cf"} Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.779527 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mvsqd" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.779891 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d490cbe338b0e26671d69aba007aa7c557f8b602860faeaf1274a5e6daff13cf" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.784992 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8ww22" event={"ID":"9fec2ad5-7f25-431f-9b19-b0206df969ef","Type":"ContainerDied","Data":"5141da70235ba124af3a8095cde146cbdcb5dc5ddd69b8f842deba750dcf49d0"} Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.785074 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5141da70235ba124af3a8095cde146cbdcb5dc5ddd69b8f842deba750dcf49d0" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.792465 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919334 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-config-data\") pod \"9fec2ad5-7f25-431f-9b19-b0206df969ef\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919444 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6dbr\" (UniqueName: \"kubernetes.io/projected/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-kube-api-access-x6dbr\") pod \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919476 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-scripts\") pod \"9fec2ad5-7f25-431f-9b19-b0206df969ef\" (UID: 
\"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919556 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-config-data\") pod \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919597 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-combined-ca-bundle\") pod \"9fec2ad5-7f25-431f-9b19-b0206df969ef\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919653 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-fernet-keys\") pod \"9fec2ad5-7f25-431f-9b19-b0206df969ef\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919700 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-credential-keys\") pod \"9fec2ad5-7f25-431f-9b19-b0206df969ef\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919804 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpqbz\" (UniqueName: \"kubernetes.io/projected/9fec2ad5-7f25-431f-9b19-b0206df969ef-kube-api-access-cpqbz\") pod \"9fec2ad5-7f25-431f-9b19-b0206df969ef\" (UID: \"9fec2ad5-7f25-431f-9b19-b0206df969ef\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919857 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-db-sync-config-data\") pod \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.919898 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-combined-ca-bundle\") pod \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\" (UID: \"1f2d8fdf-9710-4e95-a733-8ce7f61951eb\") " Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.931384 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-scripts" (OuterVolumeSpecName: "scripts") pod "9fec2ad5-7f25-431f-9b19-b0206df969ef" (UID: "9fec2ad5-7f25-431f-9b19-b0206df969ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.931431 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1f2d8fdf-9710-4e95-a733-8ce7f61951eb" (UID: "1f2d8fdf-9710-4e95-a733-8ce7f61951eb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.932197 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-kube-api-access-x6dbr" (OuterVolumeSpecName: "kube-api-access-x6dbr") pod "1f2d8fdf-9710-4e95-a733-8ce7f61951eb" (UID: "1f2d8fdf-9710-4e95-a733-8ce7f61951eb"). InnerVolumeSpecName "kube-api-access-x6dbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.933573 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9fec2ad5-7f25-431f-9b19-b0206df969ef" (UID: "9fec2ad5-7f25-431f-9b19-b0206df969ef"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.954649 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fec2ad5-7f25-431f-9b19-b0206df969ef-kube-api-access-cpqbz" (OuterVolumeSpecName: "kube-api-access-cpqbz") pod "9fec2ad5-7f25-431f-9b19-b0206df969ef" (UID: "9fec2ad5-7f25-431f-9b19-b0206df969ef"). InnerVolumeSpecName "kube-api-access-cpqbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.962804 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9fec2ad5-7f25-431f-9b19-b0206df969ef" (UID: "9fec2ad5-7f25-431f-9b19-b0206df969ef"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.964289 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fec2ad5-7f25-431f-9b19-b0206df969ef" (UID: "9fec2ad5-7f25-431f-9b19-b0206df969ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.980089 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2d8fdf-9710-4e95-a733-8ce7f61951eb" (UID: "1f2d8fdf-9710-4e95-a733-8ce7f61951eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:36 crc kubenswrapper[4729]: I0127 14:33:36.994208 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-config-data" (OuterVolumeSpecName: "config-data") pod "9fec2ad5-7f25-431f-9b19-b0206df969ef" (UID: "9fec2ad5-7f25-431f-9b19-b0206df969ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022358 4729 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022393 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpqbz\" (UniqueName: \"kubernetes.io/projected/9fec2ad5-7f25-431f-9b19-b0206df969ef-kube-api-access-cpqbz\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022406 4729 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022418 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022427 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022436 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6dbr\" (UniqueName: \"kubernetes.io/projected/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-kube-api-access-x6dbr\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022444 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022452 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.022459 4729 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9fec2ad5-7f25-431f-9b19-b0206df969ef-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.027453 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-config-data" (OuterVolumeSpecName: "config-data") pod "1f2d8fdf-9710-4e95-a733-8ce7f61951eb" (UID: "1f2d8fdf-9710-4e95-a733-8ce7f61951eb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.125198 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2d8fdf-9710-4e95-a733-8ce7f61951eb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:37 crc kubenswrapper[4729]: E0127 14:33:37.755989 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 27 14:33:37 crc kubenswrapper[4729]: E0127 14:33:37.756179 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwj27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadO
nlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6t9vl_openstack(99c157a2-c017-4ace-bff5-50dfef32c990): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:33:37 crc kubenswrapper[4729]: E0127 14:33:37.757391 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6t9vl" podUID="99c157a2-c017-4ace-bff5-50dfef32c990" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.796540 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mvsqd" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.796544 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8ww22" Jan 27 14:33:37 crc kubenswrapper[4729]: E0127 14:33:37.806464 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-6t9vl" podUID="99c157a2-c017-4ace-bff5-50dfef32c990" Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.970031 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8ww22"] Jan 27 14:33:37 crc kubenswrapper[4729]: I0127 14:33:37.978612 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8ww22"] Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.068086 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fec2ad5-7f25-431f-9b19-b0206df969ef" path="/var/lib/kubelet/pods/9fec2ad5-7f25-431f-9b19-b0206df969ef/volumes" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.083861 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-whv27"] Jan 27 14:33:38 crc kubenswrapper[4729]: E0127 14:33:38.084440 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6b05e13-e766-46f5-90f1-7ad6cab977fb" containerName="init" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.084455 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b05e13-e766-46f5-90f1-7ad6cab977fb" containerName="init" Jan 27 14:33:38 crc kubenswrapper[4729]: E0127 14:33:38.084469 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" containerName="glance-db-sync" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.084475 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" containerName="glance-db-sync" Jan 27 14:33:38 crc kubenswrapper[4729]: 
E0127 14:33:38.084493 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fec2ad5-7f25-431f-9b19-b0206df969ef" containerName="keystone-bootstrap" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.084499 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fec2ad5-7f25-431f-9b19-b0206df969ef" containerName="keystone-bootstrap" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.084709 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" containerName="glance-db-sync" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.084727 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fec2ad5-7f25-431f-9b19-b0206df969ef" containerName="keystone-bootstrap" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.084750 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6b05e13-e766-46f5-90f1-7ad6cab977fb" containerName="init" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.085562 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.088280 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.097750 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sqzc6" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.097984 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.098306 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.098482 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.103201 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-whv27"] Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.271228 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-fernet-keys\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.271292 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-scripts\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.271385 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-config-data\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.271409 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-credential-keys\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.271454 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-combined-ca-bundle\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.271531 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvnlz\" (UniqueName: \"kubernetes.io/projected/cac17ab6-7586-488e-b302-d3bf641b56ab-kube-api-access-wvnlz\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.302231 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s8s8d"] Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.309318 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.337857 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s8s8d"] Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.373409 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-fernet-keys\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.373497 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-scripts\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.373656 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-config-data\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.373701 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-credential-keys\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.373791 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-combined-ca-bundle\") pod 
\"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.373941 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvnlz\" (UniqueName: \"kubernetes.io/projected/cac17ab6-7586-488e-b302-d3bf641b56ab-kube-api-access-wvnlz\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.383208 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-fernet-keys\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.383525 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-config-data\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.384184 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-credential-keys\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.385260 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-scripts\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc 
kubenswrapper[4729]: I0127 14:33:38.392179 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-combined-ca-bundle\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.404773 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvnlz\" (UniqueName: \"kubernetes.io/projected/cac17ab6-7586-488e-b302-d3bf641b56ab-kube-api-access-wvnlz\") pod \"keystone-bootstrap-whv27\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.412577 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.476075 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.476470 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-config\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.476502 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.476603 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hrs9\" (UniqueName: \"kubernetes.io/projected/cfa8c070-9c22-42d8-beee-59f6cda90fb0-kube-api-access-5hrs9\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.476682 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.476712 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.578766 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hrs9\" (UniqueName: \"kubernetes.io/projected/cfa8c070-9c22-42d8-beee-59f6cda90fb0-kube-api-access-5hrs9\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.578855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.578932 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.579102 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.579167 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-config\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.579194 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.580199 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-nb\") pod 
\"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.581259 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.581284 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.582193 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.582257 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-config\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.601929 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hrs9\" (UniqueName: \"kubernetes.io/projected/cfa8c070-9c22-42d8-beee-59f6cda90fb0-kube-api-access-5hrs9\") pod \"dnsmasq-dns-56df8fb6b7-s8s8d\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: I0127 14:33:38.639106 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:38 crc kubenswrapper[4729]: E0127 14:33:38.929793 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 27 14:33:38 crc kubenswrapper[4729]: E0127 14:33:38.929986 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n648h59h5cfh698h5ddh598h55fh68hbch5bbh67bh85h99hf6h697hf8hbfh65h67h646h664h57fh547h669hfhfdh56ch8dh587h568h68dh68fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bzmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serv
iceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a9b28343-b8b8-4b61-9c61-0003f8ca6556): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.191236 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.194486 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.202454 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.202477 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.203364 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fp622" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.211588 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.300686 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.301130 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.301180 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-logs\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.301205 
4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.301252 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.301316 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.301457 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmvb\" (UniqueName: \"kubernetes.io/projected/fcc23b04-c403-4163-b119-715334058fbc-kube-api-access-fxmvb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.403182 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " 
pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.403301 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxmvb\" (UniqueName: \"kubernetes.io/projected/fcc23b04-c403-4163-b119-715334058fbc-kube-api-access-fxmvb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.403362 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.403395 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.403468 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-logs\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.403487 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: 
I0127 14:33:39.403521 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.404895 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.407007 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-logs\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.411824 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-config-data\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.412195 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-scripts\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.414951 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.428993 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxmvb\" (UniqueName: \"kubernetes.io/projected/fcc23b04-c403-4163-b119-715334058fbc-kube-api-access-fxmvb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.482828 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.482908 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/680b0567a76c6d555c8b8d8fdb3c5a567c93fb361c00ca1e51ff7c6b12fbca95/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.570154 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.572281 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.575403 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.592328 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.630259 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.712748 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vdqf\" (UniqueName: \"kubernetes.io/projected/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-kube-api-access-9vdqf\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.712982 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.713084 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.713132 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.713198 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.713301 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.713369 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815295 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815440 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815542 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815606 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vdqf\" (UniqueName: \"kubernetes.io/projected/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-kube-api-access-9vdqf\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815732 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815832 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.815904 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.816096 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.819174 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.819228 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16daece8afd0a3836c1be067eae3de4cdad06a409abefe2c3a15f5053658ec82/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.821316 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.822546 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.823851 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.833406 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.840572 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.844719 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vdqf\" (UniqueName: \"kubernetes.io/projected/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-kube-api-access-9vdqf\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:39 crc kubenswrapper[4729]: I0127 14:33:39.922199 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:40 crc kubenswrapper[4729]: I0127 14:33:40.224289 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:41 crc kubenswrapper[4729]: E0127 14:33:41.151154 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 27 14:33:41 crc kubenswrapper[4729]: E0127 14:33:41.151575 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPa
th:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67jhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-f9tpr_openstack(f7e4c134-9472-463f-b7be-226acbf7954b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 14:33:41 crc kubenswrapper[4729]: E0127 14:33:41.152775 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-f9tpr" podUID="f7e4c134-9472-463f-b7be-226acbf7954b" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.360311 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.456744 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-sb\") pod \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.456903 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-nb\") pod \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.456923 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-swift-storage-0\") pod \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.457127 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-svc\") pod \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.457162 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-config\") pod \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.457180 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6xdw\" 
(UniqueName: \"kubernetes.io/projected/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-kube-api-access-g6xdw\") pod \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\" (UID: \"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb\") " Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.479422 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-kube-api-access-g6xdw" (OuterVolumeSpecName: "kube-api-access-g6xdw") pod "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" (UID: "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb"). InnerVolumeSpecName "kube-api-access-g6xdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.561773 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6xdw\" (UniqueName: \"kubernetes.io/projected/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-kube-api-access-g6xdw\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.591474 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" (UID: "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.635200 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" (UID: "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.640389 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" (UID: "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.641347 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" (UID: "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.648477 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-config" (OuterVolumeSpecName: "config") pod "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" (UID: "9e66cbe5-29e9-407c-be54-e6dc2b4e84bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.675045 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.675082 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.675095 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.675108 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.675123 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.895251 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" Jan 27 14:33:41 crc kubenswrapper[4729]: E0127 14:33:41.904645 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-f9tpr" podUID="f7e4c134-9472-463f-b7be-226acbf7954b" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.904832 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" event={"ID":"9e66cbe5-29e9-407c-be54-e6dc2b4e84bb","Type":"ContainerDied","Data":"fcd1f3ca774c109dc8fe0b1335a86e5b5c8834c92479f2772cf70ca1fff81002"} Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.905007 4729 scope.go:117] "RemoveContainer" containerID="fcfd4fd5cb2a6f9c9c691c9c41b43a38f61d2ad2553366a98efd53f1423b27b7" Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.924435 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s8s8d"] Jan 27 14:33:41 crc kubenswrapper[4729]: I0127 14:33:41.980347 4729 scope.go:117] "RemoveContainer" containerID="433038ebea6b71f9cbacb6c00e2262bb9629117dc95b3f310cf5190eea02e9d0" Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.101069 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-lgdfh"] Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.174775 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-lgdfh"] Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.208961 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.288687 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 
14:33:42.301822 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-whv27"] Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.468107 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:42 crc kubenswrapper[4729]: W0127 14:33:42.482269 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee4b7bbf_2ab5_4142_b4c6_af9ca60c98df.slice/crio-d00c27b7acb141d207dd7e8421ac0c8fb97aa066124be5c6f71308511a238132 WatchSource:0}: Error finding container d00c27b7acb141d207dd7e8421ac0c8fb97aa066124be5c6f71308511a238132: Status 404 returned error can't find the container with id d00c27b7acb141d207dd7e8421ac0c8fb97aa066124be5c6f71308511a238132 Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.493054 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c79d794d7-lgdfh" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.176:5353: i/o timeout" Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.616500 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.905681 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-whv27" event={"ID":"cac17ab6-7586-488e-b302-d3bf641b56ab","Type":"ContainerStarted","Data":"ac89ba3f1e3170ea6990200b8ed3951dbcafe205c30ec887514d6ef579287b51"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.906146 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-whv27" event={"ID":"cac17ab6-7586-488e-b302-d3bf641b56ab","Type":"ContainerStarted","Data":"a3447421159d97458dd2e692eb87e8d44b8a29e0175272ece3f988fd4b765df5"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.914438 4729 generic.go:334] 
"Generic (PLEG): container finished" podID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerID="dc858358caa87c48d541f1298e5c8e66d360e7f949d596bbe14d3e47febbd8e9" exitCode=0 Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.914511 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" event={"ID":"cfa8c070-9c22-42d8-beee-59f6cda90fb0","Type":"ContainerDied","Data":"dc858358caa87c48d541f1298e5c8e66d360e7f949d596bbe14d3e47febbd8e9"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.914542 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" event={"ID":"cfa8c070-9c22-42d8-beee-59f6cda90fb0","Type":"ContainerStarted","Data":"ef853e21941c75f697b90e3b69c0b91993d986d0a086299ce4d88622dd5acc3e"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.917185 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df","Type":"ContainerStarted","Data":"d00c27b7acb141d207dd7e8421ac0c8fb97aa066124be5c6f71308511a238132"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.919374 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcc23b04-c403-4163-b119-715334058fbc","Type":"ContainerStarted","Data":"07761bbd8cbf13a5839f3ba11ad2da3dc13fc3a72426cc5b253f7a0c0a4e42cc"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.921162 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hw6n4" event={"ID":"bd20345f-846e-4b32-ae20-dcfd968b207d","Type":"ContainerStarted","Data":"5a19886819a410481f2f8bb7cfe997fd5637aba3651349a06580e98e101201a5"} Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.937563 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-whv27" podStartSLOduration=4.937540315 podStartE2EDuration="4.937540315s" 
podCreationTimestamp="2026-01-27 14:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:42.930264124 +0000 UTC m=+1709.514455138" watchObservedRunningTime="2026-01-27 14:33:42.937540315 +0000 UTC m=+1709.521731319" Jan 27 14:33:42 crc kubenswrapper[4729]: I0127 14:33:42.940234 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6cnrq" event={"ID":"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9","Type":"ContainerStarted","Data":"ac7329e4e741fe65f9695bd0e692d86682d0f6404db0ce56ab8d15f7a7ee265a"} Jan 27 14:33:43 crc kubenswrapper[4729]: I0127 14:33:42.996191 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hw6n4" podStartSLOduration=4.250392712 podStartE2EDuration="37.996173607s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="2026-01-27 14:33:07.768394665 +0000 UTC m=+1674.352585689" lastFinishedPulling="2026-01-27 14:33:41.51417558 +0000 UTC m=+1708.098366584" observedRunningTime="2026-01-27 14:33:42.952628782 +0000 UTC m=+1709.536819786" watchObservedRunningTime="2026-01-27 14:33:42.996173607 +0000 UTC m=+1709.580364611" Jan 27 14:33:43 crc kubenswrapper[4729]: I0127 14:33:43.047044 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-6cnrq" podStartSLOduration=3.606048006 podStartE2EDuration="38.047018892s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="2026-01-27 14:33:07.082558023 +0000 UTC m=+1673.666749027" lastFinishedPulling="2026-01-27 14:33:41.523528909 +0000 UTC m=+1708.107719913" observedRunningTime="2026-01-27 14:33:43.0002724 +0000 UTC m=+1709.584463404" watchObservedRunningTime="2026-01-27 14:33:43.047018892 +0000 UTC m=+1709.631209916" Jan 27 14:33:43 crc kubenswrapper[4729]: I0127 14:33:43.968488 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df","Type":"ContainerStarted","Data":"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5"} Jan 27 14:33:43 crc kubenswrapper[4729]: I0127 14:33:43.972301 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcc23b04-c403-4163-b119-715334058fbc","Type":"ContainerStarted","Data":"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23"} Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.114117 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" path="/var/lib/kubelet/pods/9e66cbe5-29e9-407c-be54-e6dc2b4e84bb/volumes" Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.984963 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df","Type":"ContainerStarted","Data":"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760"} Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.985202 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-log" containerID="cri-o://1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5" gracePeriod=30 Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.985232 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-httpd" containerID="cri-o://2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760" gracePeriod=30 Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.989471 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"fcc23b04-c403-4163-b119-715334058fbc","Type":"ContainerStarted","Data":"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885"} Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.989709 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-log" containerID="cri-o://afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23" gracePeriod=30 Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.989828 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-httpd" containerID="cri-o://a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885" gracePeriod=30 Jan 27 14:33:44 crc kubenswrapper[4729]: I0127 14:33:44.999186 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerStarted","Data":"cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224"} Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.002263 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" event={"ID":"cfa8c070-9c22-42d8-beee-59f6cda90fb0","Type":"ContainerStarted","Data":"356261c680d35fde5ab345835cf1d45e880811180881256176404acaa9f35d70"} Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.002440 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.016098 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.016077505 podStartE2EDuration="7.016077505s" podCreationTimestamp="2026-01-27 14:33:38 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:45.013677459 +0000 UTC m=+1711.597868473" watchObservedRunningTime="2026-01-27 14:33:45.016077505 +0000 UTC m=+1711.600268509" Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.040978 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" podStartSLOduration=7.040955883 podStartE2EDuration="7.040955883s" podCreationTimestamp="2026-01-27 14:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:45.036984423 +0000 UTC m=+1711.621175427" watchObservedRunningTime="2026-01-27 14:33:45.040955883 +0000 UTC m=+1711.625146897" Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.065118 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.06508097 podStartE2EDuration="7.06508097s" podCreationTimestamp="2026-01-27 14:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:45.061952823 +0000 UTC m=+1711.646143837" watchObservedRunningTime="2026-01-27 14:33:45.06508097 +0000 UTC m=+1711.649271984" Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.912938 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:45 crc kubenswrapper[4729]: I0127 14:33:45.922540 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.011578 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmvb\" (UniqueName: \"kubernetes.io/projected/fcc23b04-c403-4163-b119-715334058fbc-kube-api-access-fxmvb\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.011646 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-httpd-run\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.011680 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-combined-ca-bundle\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.011729 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-combined-ca-bundle\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012463 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012496 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-config-data\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012536 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-httpd-run\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012617 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012802 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-logs\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012845 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-scripts\") pod \"fcc23b04-c403-4163-b119-715334058fbc\" (UID: \"fcc23b04-c403-4163-b119-715334058fbc\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.012890 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-scripts\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.013289 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.013659 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-logs" (OuterVolumeSpecName: "logs") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.015401 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vdqf\" (UniqueName: \"kubernetes.io/projected/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-kube-api-access-9vdqf\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.015442 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-config-data\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.015998 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-logs\") pod \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\" (UID: \"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df\") " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.016984 4729 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.017008 4729 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.017020 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcc23b04-c403-4163-b119-715334058fbc-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.017316 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-logs" (OuterVolumeSpecName: "logs") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.035520 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-scripts" (OuterVolumeSpecName: "scripts") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.035676 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc23b04-c403-4163-b119-715334058fbc-kube-api-access-fxmvb" (OuterVolumeSpecName: "kube-api-access-fxmvb") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "kube-api-access-fxmvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.035745 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-scripts" (OuterVolumeSpecName: "scripts") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040238 4729 generic.go:334] "Generic (PLEG): container finished" podID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerID="2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760" exitCode=143 Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040274 4729 generic.go:334] "Generic (PLEG): container finished" podID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerID="1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5" exitCode=143 Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040311 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df","Type":"ContainerDied","Data":"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760"} Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040340 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df","Type":"ContainerDied","Data":"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5"} Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040349 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df","Type":"ContainerDied","Data":"d00c27b7acb141d207dd7e8421ac0c8fb97aa066124be5c6f71308511a238132"} Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040364 4729 scope.go:117] "RemoveContainer" containerID="2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.040492 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.051463 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.051968 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.061564 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12" (OuterVolumeSpecName: "glance") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.061813 4729 generic.go:334] "Generic (PLEG): container finished" podID="fcc23b04-c403-4163-b119-715334058fbc" containerID="a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885" exitCode=143 Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.061857 4729 generic.go:334] "Generic (PLEG): container finished" podID="fcc23b04-c403-4163-b119-715334058fbc" containerID="afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23" exitCode=143 Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.061939 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.068522 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-kube-api-access-9vdqf" (OuterVolumeSpecName: "kube-api-access-9vdqf") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "kube-api-access-9vdqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.080128 4729 scope.go:117] "RemoveContainer" containerID="1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.081025 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb" (OuterVolumeSpecName: "glance") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "pvc-57455421-4417-4d20-8801-ff0bdce950eb". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.092852 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcc23b04-c403-4163-b119-715334058fbc","Type":"ContainerDied","Data":"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885"} Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.093193 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcc23b04-c403-4163-b119-715334058fbc","Type":"ContainerDied","Data":"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23"} Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.093430 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fcc23b04-c403-4163-b119-715334058fbc","Type":"ContainerDied","Data":"07761bbd8cbf13a5839f3ba11ad2da3dc13fc3a72426cc5b253f7a0c0a4e42cc"} Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.096222 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.121677 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vdqf\" (UniqueName: \"kubernetes.io/projected/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-kube-api-access-9vdqf\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122007 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122150 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxmvb\" (UniqueName: \"kubernetes.io/projected/fcc23b04-c403-4163-b119-715334058fbc-kube-api-access-fxmvb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122240 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122355 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") on node \"crc\" " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122464 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") on node \"crc\" " Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122555 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-scripts\") on 
node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.122660 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.125255 4729 scope.go:117] "RemoveContainer" containerID="2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.125854 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760\": container with ID starting with 2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760 not found: ID does not exist" containerID="2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.125914 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760"} err="failed to get container status \"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760\": rpc error: code = NotFound desc = could not find container \"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760\": container with ID starting with 2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.125938 4729 scope.go:117] "RemoveContainer" containerID="1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.126326 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5\": container with ID starting with 
1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5 not found: ID does not exist" containerID="1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.126389 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5"} err="failed to get container status \"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5\": rpc error: code = NotFound desc = could not find container \"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5\": container with ID starting with 1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.126412 4729 scope.go:117] "RemoveContainer" containerID="2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.126799 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760"} err="failed to get container status \"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760\": rpc error: code = NotFound desc = could not find container \"2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760\": container with ID starting with 2c4e82f8639d6ceef8b15a93366604106bf95e752476fb1743f927e446162760 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.126851 4729 scope.go:117] "RemoveContainer" containerID="1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.127098 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5"} err="failed to get container status 
\"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5\": rpc error: code = NotFound desc = could not find container \"1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5\": container with ID starting with 1cc554a401e09651b9608bfcd4f081a6ebc3de7bbd807bd846963df9fe07adc5 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.127138 4729 scope.go:117] "RemoveContainer" containerID="a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.128368 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-config-data" (OuterVolumeSpecName: "config-data") pod "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" (UID: "ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.133004 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.152147 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.152294 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12") on node "crc" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.179511 4729 scope.go:117] "RemoveContainer" containerID="afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.188209 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.188375 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57455421-4417-4d20-8801-ff0bdce950eb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb") on node "crc" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.221538 4729 scope.go:117] "RemoveContainer" containerID="a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.222356 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885\": container with ID starting with a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885 not found: ID does not exist" containerID="a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.222408 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885"} err="failed to get container status \"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885\": rpc error: code = NotFound desc = could not find 
container \"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885\": container with ID starting with a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.222437 4729 scope.go:117] "RemoveContainer" containerID="afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.233143 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23\": container with ID starting with afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23 not found: ID does not exist" containerID="afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.233207 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23"} err="failed to get container status \"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23\": rpc error: code = NotFound desc = could not find container \"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23\": container with ID starting with afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.233243 4729 scope.go:117] "RemoveContainer" containerID="a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.233894 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885"} err="failed to get container status \"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885\": rpc error: code = NotFound desc = could 
not find container \"a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885\": container with ID starting with a039e111cf70c22bb3a87a22ee97680ea82b9fc744196e5106bf1b4890a9b885 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.233941 4729 scope.go:117] "RemoveContainer" containerID="afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.234477 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23"} err="failed to get container status \"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23\": rpc error: code = NotFound desc = could not find container \"afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23\": container with ID starting with afa49fa52b320e5fc3970d49d49836b38d1c2152e99e9bf59811f50f8de71a23 not found: ID does not exist" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.234646 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.234679 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.234698 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.234711 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.236580 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-config-data" (OuterVolumeSpecName: "config-data") pod "fcc23b04-c403-4163-b119-715334058fbc" (UID: "fcc23b04-c403-4163-b119-715334058fbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.337362 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc23b04-c403-4163-b119-715334058fbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.390036 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.415798 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432226 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.432836 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="init" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432858 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="init" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.432889 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-httpd" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432898 4729 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-httpd" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.432912 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432920 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.432932 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-log" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432940 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-log" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.432966 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-httpd" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432974 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-httpd" Jan 27 14:33:46 crc kubenswrapper[4729]: E0127 14:33:46.432990 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-log" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.432998 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-log" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.433261 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-log" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.433280 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-httpd" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.433294 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc23b04-c403-4163-b119-715334058fbc" containerName="glance-httpd" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.433318 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e66cbe5-29e9-407c-be54-e6dc2b4e84bb" containerName="dnsmasq-dns" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.433331 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" containerName="glance-log" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.434824 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.438290 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.438648 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fp622" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.438823 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.452281 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.469022 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.488664 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.501111 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.512253 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.515166 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.519410 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.523361 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.535049 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.548864 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29hcd\" (UniqueName: \"kubernetes.io/projected/a349bd01-6251-4026-b264-0d85efa07d09-kube-api-access-29hcd\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.548940 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.548994 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.549017 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.549046 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.549105 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-logs\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.549133 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.549326 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.651953 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652029 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-scripts\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652070 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652104 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc 
kubenswrapper[4729]: I0127 14:33:46.652178 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29hcd\" (UniqueName: \"kubernetes.io/projected/a349bd01-6251-4026-b264-0d85efa07d09-kube-api-access-29hcd\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652204 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652232 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-config-data\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652289 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652310 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 
14:33:46.652335 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652385 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-logs\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652416 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652444 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652479 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652513 
4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwfwr\" (UniqueName: \"kubernetes.io/projected/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-kube-api-access-vwfwr\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.652624 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-logs\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.653344 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-logs\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.653635 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.656679 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.657166 4729 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.659358 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.659414 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16daece8afd0a3836c1be067eae3de4cdad06a409abefe2c3a15f5053658ec82/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.662500 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.669663 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.688762 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29hcd\" (UniqueName: 
\"kubernetes.io/projected/a349bd01-6251-4026-b264-0d85efa07d09-kube-api-access-29hcd\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.713857 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754385 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-logs\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754449 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754499 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-scripts\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754545 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754565 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754606 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-config-data\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754667 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.754703 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwfwr\" (UniqueName: \"kubernetes.io/projected/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-kube-api-access-vwfwr\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.756227 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-logs\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.756680 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.766125 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.770433 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-config-data\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.777914 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-scripts\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.778180 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.778807 4729 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.779149 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/680b0567a76c6d555c8b8d8fdb3c5a567c93fb361c00ca1e51ff7c6b12fbca95/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.785139 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwfwr\" (UniqueName: \"kubernetes.io/projected/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-kube-api-access-vwfwr\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.790530 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.823490 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " pod="openstack/glance-default-external-api-0" Jan 27 14:33:46 crc kubenswrapper[4729]: I0127 14:33:46.864378 4729 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:33:47 crc kubenswrapper[4729]: I0127 14:33:47.468494 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:33:47 crc kubenswrapper[4729]: I0127 14:33:47.738472 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:33:47 crc kubenswrapper[4729]: W0127 14:33:47.884989 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0885ef7c_7d5a_4f4a_80b1_5d0c1d6b7e52.slice/crio-c3febaae700fa94bb6f0a0fcff925ae8b7b1abd57a355fec3ace41685f3cab1d WatchSource:0}: Error finding container c3febaae700fa94bb6f0a0fcff925ae8b7b1abd57a355fec3ace41685f3cab1d: Status 404 returned error can't find the container with id c3febaae700fa94bb6f0a0fcff925ae8b7b1abd57a355fec3ace41685f3cab1d Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.071319 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df" path="/var/lib/kubelet/pods/ee4b7bbf-2ab5-4142-b4c6-af9ca60c98df/volumes" Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.072316 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc23b04-c403-4163-b119-715334058fbc" path="/var/lib/kubelet/pods/fcc23b04-c403-4163-b119-715334058fbc/volumes" Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.119938 4729 generic.go:334] "Generic (PLEG): container finished" podID="cac17ab6-7586-488e-b302-d3bf641b56ab" containerID="ac89ba3f1e3170ea6990200b8ed3951dbcafe205c30ec887514d6ef579287b51" exitCode=0 Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.120014 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-whv27" 
event={"ID":"cac17ab6-7586-488e-b302-d3bf641b56ab","Type":"ContainerDied","Data":"ac89ba3f1e3170ea6990200b8ed3951dbcafe205c30ec887514d6ef579287b51"} Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.124345 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52","Type":"ContainerStarted","Data":"c3febaae700fa94bb6f0a0fcff925ae8b7b1abd57a355fec3ace41685f3cab1d"} Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.128455 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a349bd01-6251-4026-b264-0d85efa07d09","Type":"ContainerStarted","Data":"4ff08452eff672dbd795081df6b86e7adf636a5c02fa007484c01388b9253303"} Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.130390 4729 generic.go:334] "Generic (PLEG): container finished" podID="bd20345f-846e-4b32-ae20-dcfd968b207d" containerID="5a19886819a410481f2f8bb7cfe997fd5637aba3651349a06580e98e101201a5" exitCode=0 Jan 27 14:33:48 crc kubenswrapper[4729]: I0127 14:33:48.130422 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hw6n4" event={"ID":"bd20345f-846e-4b32-ae20-dcfd968b207d","Type":"ContainerDied","Data":"5a19886819a410481f2f8bb7cfe997fd5637aba3651349a06580e98e101201a5"} Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.183208 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52","Type":"ContainerStarted","Data":"538099ac1b22a1b548e632269a4ac1a5f801ecf28b1c6e3913d34b01c6ec5efd"} Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.186990 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a349bd01-6251-4026-b264-0d85efa07d09","Type":"ContainerStarted","Data":"7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610"} Jan 27 14:33:49 crc 
kubenswrapper[4729]: I0127 14:33:49.852790 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.959000 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-combined-ca-bundle\") pod \"bd20345f-846e-4b32-ae20-dcfd968b207d\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.959165 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx47f\" (UniqueName: \"kubernetes.io/projected/bd20345f-846e-4b32-ae20-dcfd968b207d-kube-api-access-cx47f\") pod \"bd20345f-846e-4b32-ae20-dcfd968b207d\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.959260 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd20345f-846e-4b32-ae20-dcfd968b207d-logs\") pod \"bd20345f-846e-4b32-ae20-dcfd968b207d\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.959288 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-scripts\") pod \"bd20345f-846e-4b32-ae20-dcfd968b207d\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.959442 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-config-data\") pod \"bd20345f-846e-4b32-ae20-dcfd968b207d\" (UID: \"bd20345f-846e-4b32-ae20-dcfd968b207d\") " Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.961478 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd20345f-846e-4b32-ae20-dcfd968b207d-logs" (OuterVolumeSpecName: "logs") pod "bd20345f-846e-4b32-ae20-dcfd968b207d" (UID: "bd20345f-846e-4b32-ae20-dcfd968b207d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.963237 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd20345f-846e-4b32-ae20-dcfd968b207d-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.966513 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-scripts" (OuterVolumeSpecName: "scripts") pod "bd20345f-846e-4b32-ae20-dcfd968b207d" (UID: "bd20345f-846e-4b32-ae20-dcfd968b207d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:49 crc kubenswrapper[4729]: I0127 14:33:49.967633 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd20345f-846e-4b32-ae20-dcfd968b207d-kube-api-access-cx47f" (OuterVolumeSpecName: "kube-api-access-cx47f") pod "bd20345f-846e-4b32-ae20-dcfd968b207d" (UID: "bd20345f-846e-4b32-ae20-dcfd968b207d"). InnerVolumeSpecName "kube-api-access-cx47f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.012692 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd20345f-846e-4b32-ae20-dcfd968b207d" (UID: "bd20345f-846e-4b32-ae20-dcfd968b207d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.066650 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx47f\" (UniqueName: \"kubernetes.io/projected/bd20345f-846e-4b32-ae20-dcfd968b207d-kube-api-access-cx47f\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.067055 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.067066 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.108050 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-config-data" (OuterVolumeSpecName: "config-data") pod "bd20345f-846e-4b32-ae20-dcfd968b207d" (UID: "bd20345f-846e-4b32-ae20-dcfd968b207d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.169495 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd20345f-846e-4b32-ae20-dcfd968b207d-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.212097 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52","Type":"ContainerStarted","Data":"9e32dd7451d221d043d9724a8f9a6b7f7d9a0369c27ad7cc736ec0d9250da2f6"} Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.218319 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a349bd01-6251-4026-b264-0d85efa07d09","Type":"ContainerStarted","Data":"2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230"} Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.221080 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hw6n4" event={"ID":"bd20345f-846e-4b32-ae20-dcfd968b207d","Type":"ContainerDied","Data":"689cc1b2a64d699fa4445cf7cf860a76b8a5087a761eb440855537bf57d31dd4"} Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.221108 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="689cc1b2a64d699fa4445cf7cf860a76b8a5087a761eb440855537bf57d31dd4" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.221155 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hw6n4" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.253048 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.253025663 podStartE2EDuration="4.253025663s" podCreationTimestamp="2026-01-27 14:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:50.238756078 +0000 UTC m=+1716.822947092" watchObservedRunningTime="2026-01-27 14:33:50.253025663 +0000 UTC m=+1716.837216687" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.291754 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.291727693 podStartE2EDuration="4.291727693s" podCreationTimestamp="2026-01-27 14:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:50.281086139 +0000 UTC m=+1716.865277143" watchObservedRunningTime="2026-01-27 14:33:50.291727693 +0000 UTC m=+1716.875918707" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.396581 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7cb844499b-jdr2d"] Jan 27 14:33:50 crc kubenswrapper[4729]: E0127 14:33:50.398160 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd20345f-846e-4b32-ae20-dcfd968b207d" containerName="placement-db-sync" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.398343 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd20345f-846e-4b32-ae20-dcfd968b207d" containerName="placement-db-sync" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.399122 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd20345f-846e-4b32-ae20-dcfd968b207d" containerName="placement-db-sync" Jan 27 14:33:50 crc 
kubenswrapper[4729]: I0127 14:33:50.405139 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.435504 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.436050 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.436600 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-75vwn" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.436987 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.437390 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.450230 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7cb844499b-jdr2d"] Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.599898 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-config-data\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.600037 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-scripts\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: 
I0127 14:33:50.600107 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-logs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.600533 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-internal-tls-certs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.600964 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgp9t\" (UniqueName: \"kubernetes.io/projected/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-kube-api-access-pgp9t\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.601179 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-public-tls-certs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.601246 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-combined-ca-bundle\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 
crc kubenswrapper[4729]: I0127 14:33:50.703364 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-scripts\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.703421 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-logs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.703480 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-internal-tls-certs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.703532 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgp9t\" (UniqueName: \"kubernetes.io/projected/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-kube-api-access-pgp9t\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.703584 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-public-tls-certs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.703609 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-combined-ca-bundle\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.703640 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-config-data\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.709765 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-internal-tls-certs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.710771 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-logs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.717186 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-config-data\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.717523 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-combined-ca-bundle\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.718572 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-public-tls-certs\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.727713 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-scripts\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.746206 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgp9t\" (UniqueName: \"kubernetes.io/projected/6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2-kube-api-access-pgp9t\") pod \"placement-7cb844499b-jdr2d\" (UID: \"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2\") " pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:50 crc kubenswrapper[4729]: I0127 14:33:50.842235 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:53 crc kubenswrapper[4729]: I0127 14:33:53.641036 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:33:53 crc kubenswrapper[4729]: I0127 14:33:53.736919 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-klp4v"] Jan 27 14:33:53 crc kubenswrapper[4729]: I0127 14:33:53.737205 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerName="dnsmasq-dns" containerID="cri-o://6b93d5d14331b2ccf9c7785d5ea009d81a7437b3281e693e628c3543fe992d79" gracePeriod=10 Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.269238 4729 generic.go:334] "Generic (PLEG): container finished" podID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerID="6b93d5d14331b2ccf9c7785d5ea009d81a7437b3281e693e628c3543fe992d79" exitCode=0 Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.269287 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" event={"ID":"bbe84ff4-e9ce-40bb-adea-426979dbd7c3","Type":"ContainerDied","Data":"6b93d5d14331b2ccf9c7785d5ea009d81a7437b3281e693e628c3543fe992d79"} Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.801354 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.908919 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-combined-ca-bundle\") pod \"cac17ab6-7586-488e-b302-d3bf641b56ab\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.909319 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-fernet-keys\") pod \"cac17ab6-7586-488e-b302-d3bf641b56ab\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.909376 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvnlz\" (UniqueName: \"kubernetes.io/projected/cac17ab6-7586-488e-b302-d3bf641b56ab-kube-api-access-wvnlz\") pod \"cac17ab6-7586-488e-b302-d3bf641b56ab\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.909546 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-config-data\") pod \"cac17ab6-7586-488e-b302-d3bf641b56ab\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.909638 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-credential-keys\") pod \"cac17ab6-7586-488e-b302-d3bf641b56ab\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.909837 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-scripts\") pod \"cac17ab6-7586-488e-b302-d3bf641b56ab\" (UID: \"cac17ab6-7586-488e-b302-d3bf641b56ab\") " Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.920563 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cac17ab6-7586-488e-b302-d3bf641b56ab" (UID: "cac17ab6-7586-488e-b302-d3bf641b56ab"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.920710 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cac17ab6-7586-488e-b302-d3bf641b56ab" (UID: "cac17ab6-7586-488e-b302-d3bf641b56ab"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.920838 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cac17ab6-7586-488e-b302-d3bf641b56ab-kube-api-access-wvnlz" (OuterVolumeSpecName: "kube-api-access-wvnlz") pod "cac17ab6-7586-488e-b302-d3bf641b56ab" (UID: "cac17ab6-7586-488e-b302-d3bf641b56ab"). InnerVolumeSpecName "kube-api-access-wvnlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.938914 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-scripts" (OuterVolumeSpecName: "scripts") pod "cac17ab6-7586-488e-b302-d3bf641b56ab" (UID: "cac17ab6-7586-488e-b302-d3bf641b56ab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.966224 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cac17ab6-7586-488e-b302-d3bf641b56ab" (UID: "cac17ab6-7586-488e-b302-d3bf641b56ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:54 crc kubenswrapper[4729]: I0127 14:33:54.987365 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-config-data" (OuterVolumeSpecName: "config-data") pod "cac17ab6-7586-488e-b302-d3bf641b56ab" (UID: "cac17ab6-7586-488e-b302-d3bf641b56ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.012266 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.012304 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.012321 4729 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.012335 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvnlz\" (UniqueName: \"kubernetes.io/projected/cac17ab6-7586-488e-b302-d3bf641b56ab-kube-api-access-wvnlz\") on node \"crc\" DevicePath \"\"" Jan 27 
14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.012347 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.012359 4729 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cac17ab6-7586-488e-b302-d3bf641b56ab-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.215086 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7cb844499b-jdr2d"] Jan 27 14:33:55 crc kubenswrapper[4729]: W0127 14:33:55.218728 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c1c7268_37a4_45d9_aa5d_bcf79fb55bd2.slice/crio-7f00a0b5fabf5769f3e7f4c412f6c9fcbbae4d8475c0775aac06f6fc3af575c6 WatchSource:0}: Error finding container 7f00a0b5fabf5769f3e7f4c412f6c9fcbbae4d8475c0775aac06f6fc3af575c6: Status 404 returned error can't find the container with id 7f00a0b5fabf5769f3e7f4c412f6c9fcbbae4d8475c0775aac06f6fc3af575c6 Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.243763 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.306714 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7cb844499b-jdr2d" event={"ID":"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2","Type":"ContainerStarted","Data":"7f00a0b5fabf5769f3e7f4c412f6c9fcbbae4d8475c0775aac06f6fc3af575c6"} Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.309868 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" event={"ID":"bbe84ff4-e9ce-40bb-adea-426979dbd7c3","Type":"ContainerDied","Data":"dc54c08b6e335ee6a70197f852e060bcf8923142082a5522c649748207f782c7"} Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.309944 4729 scope.go:117] "RemoveContainer" containerID="6b93d5d14331b2ccf9c7785d5ea009d81a7437b3281e693e628c3543fe992d79" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.309994 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-klp4v" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.313454 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-whv27" event={"ID":"cac17ab6-7586-488e-b302-d3bf641b56ab","Type":"ContainerDied","Data":"a3447421159d97458dd2e692eb87e8d44b8a29e0175272ece3f988fd4b765df5"} Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.313499 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3447421159d97458dd2e692eb87e8d44b8a29e0175272ece3f988fd4b765df5" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.313517 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-whv27" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.317862 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.317949 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-nb\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.318134 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-sb\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.318156 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-config\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.318219 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfghc\" (UniqueName: \"kubernetes.io/projected/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-kube-api-access-gfghc\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.318304 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-swift-storage-0\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.331351 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-kube-api-access-gfghc" (OuterVolumeSpecName: "kube-api-access-gfghc") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "kube-api-access-gfghc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.350722 4729 scope.go:117] "RemoveContainer" containerID="cf98a09bcbaac99fe85f5b248a998c2f55b689641e8037cbfc56a70bcb6f3f49" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.394451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.396705 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-config" (OuterVolumeSpecName: "config") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.400988 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.404444 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.420333 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.420600 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc\") pod \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\" (UID: \"bbe84ff4-e9ce-40bb-adea-426979dbd7c3\") " Jan 27 14:33:55 crc kubenswrapper[4729]: W0127 14:33:55.421231 4729 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bbe84ff4-e9ce-40bb-adea-426979dbd7c3/volumes/kubernetes.io~configmap/dns-svc Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421277 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbe84ff4-e9ce-40bb-adea-426979dbd7c3" (UID: "bbe84ff4-e9ce-40bb-adea-426979dbd7c3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421655 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421688 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421772 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421789 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421802 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.421817 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfghc\" (UniqueName: \"kubernetes.io/projected/bbe84ff4-e9ce-40bb-adea-426979dbd7c3-kube-api-access-gfghc\") on node \"crc\" DevicePath \"\"" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.646464 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-klp4v"] Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.659989 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-klp4v"] Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.917498 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-79f67449f-t7hgq"] Jan 27 14:33:55 crc kubenswrapper[4729]: E0127 14:33:55.918033 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerName="init" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.918053 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerName="init" Jan 27 14:33:55 crc kubenswrapper[4729]: E0127 14:33:55.918065 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerName="dnsmasq-dns" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.918072 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerName="dnsmasq-dns" Jan 27 14:33:55 crc kubenswrapper[4729]: E0127 14:33:55.918112 4729 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cac17ab6-7586-488e-b302-d3bf641b56ab" containerName="keystone-bootstrap" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.918121 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cac17ab6-7586-488e-b302-d3bf641b56ab" containerName="keystone-bootstrap" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.918392 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" containerName="dnsmasq-dns" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.918421 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="cac17ab6-7586-488e-b302-d3bf641b56ab" containerName="keystone-bootstrap" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.919338 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.921478 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.921867 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.921946 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.928593 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.928949 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-sqzc6" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.930109 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 27 14:33:55 crc kubenswrapper[4729]: I0127 14:33:55.933042 4729 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/keystone-79f67449f-t7hgq"] Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.032893 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-fernet-keys\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.032971 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcqv2\" (UniqueName: \"kubernetes.io/projected/66c77355-de9a-4aab-8d65-504c74911382-kube-api-access-mcqv2\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.033158 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-combined-ca-bundle\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.033271 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-credential-keys\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.033439 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-internal-tls-certs\") pod \"keystone-79f67449f-t7hgq\" (UID: 
\"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.033508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-public-tls-certs\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.033670 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-scripts\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.033821 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-config-data\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.065814 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbe84ff4-e9ce-40bb-adea-426979dbd7c3" path="/var/lib/kubelet/pods/bbe84ff4-e9ce-40bb-adea-426979dbd7c3/volumes" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137231 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-combined-ca-bundle\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137320 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-credential-keys\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137356 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-internal-tls-certs\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137384 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-public-tls-certs\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137458 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-scripts\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137726 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-config-data\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137886 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-fernet-keys\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.137938 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcqv2\" (UniqueName: \"kubernetes.io/projected/66c77355-de9a-4aab-8d65-504c74911382-kube-api-access-mcqv2\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.142741 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-credential-keys\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.143234 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-fernet-keys\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.145462 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-public-tls-certs\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.146383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-internal-tls-certs\") pod \"keystone-79f67449f-t7hgq\" (UID: 
\"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.146454 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-config-data\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.151683 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-combined-ca-bundle\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.158716 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66c77355-de9a-4aab-8d65-504c74911382-scripts\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.163441 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcqv2\" (UniqueName: \"kubernetes.io/projected/66c77355-de9a-4aab-8d65-504c74911382-kube-api-access-mcqv2\") pod \"keystone-79f67449f-t7hgq\" (UID: \"66c77355-de9a-4aab-8d65-504c74911382\") " pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.238601 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.334267 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7cb844499b-jdr2d" event={"ID":"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2","Type":"ContainerStarted","Data":"233a8d37b00fa44f9eac8af87acb858de1a1c0ff8fcb51e838d60541797a75df"} Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.768050 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.768377 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.841290 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-79f67449f-t7hgq"] Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.844217 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.852482 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.864687 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:33:56 crc kubenswrapper[4729]: I0127 14:33:56.864748 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.045919 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.277205 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.353850 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerStarted","Data":"afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116"} Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.379637 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7cb844499b-jdr2d" event={"ID":"6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2","Type":"ContainerStarted","Data":"dbe5ed15571e3cc476b0a0623e405f60c02ff9c4e3b95bfb1cb3fa66a49ba896"} Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.380130 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.380324 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.385625 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f67449f-t7hgq" event={"ID":"66c77355-de9a-4aab-8d65-504c74911382","Type":"ContainerStarted","Data":"29cfefef34d407a1a968642c38b8991a8feacfc9b623ba6e9b31013dbade80ed"} Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.385673 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-79f67449f-t7hgq" event={"ID":"66c77355-de9a-4aab-8d65-504c74911382","Type":"ContainerStarted","Data":"2ad89f950cc2e59b1be25f76c545cd43478b83dd6d13fcd1d574b929e867f968"} Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.385952 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.397615 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6t9vl" 
event={"ID":"99c157a2-c017-4ace-bff5-50dfef32c990","Type":"ContainerStarted","Data":"224b7cab4db35f3a104d11bb9e59d9174f138764f01f9114485effdcea39f86b"} Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.397658 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.398171 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.398385 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.398414 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.415844 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7cb844499b-jdr2d" podStartSLOduration=7.415820889 podStartE2EDuration="7.415820889s" podCreationTimestamp="2026-01-27 14:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:57.411815268 +0000 UTC m=+1723.996006282" watchObservedRunningTime="2026-01-27 14:33:57.415820889 +0000 UTC m=+1724.000011893" Jan 27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.439403 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-79f67449f-t7hgq" podStartSLOduration=2.43938424 podStartE2EDuration="2.43938424s" podCreationTimestamp="2026-01-27 14:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:33:57.436066249 +0000 UTC m=+1724.020257273" watchObservedRunningTime="2026-01-27 14:33:57.43938424 +0000 UTC m=+1724.023575244" Jan 
27 14:33:57 crc kubenswrapper[4729]: I0127 14:33:57.478355 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6t9vl" podStartSLOduration=4.300580679 podStartE2EDuration="52.478333847s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="2026-01-27 14:33:07.984965203 +0000 UTC m=+1674.569156207" lastFinishedPulling="2026-01-27 14:33:56.162718371 +0000 UTC m=+1722.746909375" observedRunningTime="2026-01-27 14:33:57.461974415 +0000 UTC m=+1724.046165439" watchObservedRunningTime="2026-01-27 14:33:57.478333847 +0000 UTC m=+1724.062524851" Jan 27 14:33:58 crc kubenswrapper[4729]: I0127 14:33:58.428703 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f9tpr" event={"ID":"f7e4c134-9472-463f-b7be-226acbf7954b","Type":"ContainerStarted","Data":"7107d07701c55bd904425d30531252a639609fa7aec898ce377fada986bc5b1c"} Jan 27 14:33:58 crc kubenswrapper[4729]: I0127 14:33:58.464350 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-f9tpr" podStartSLOduration=4.117926298 podStartE2EDuration="53.464317959s" podCreationTimestamp="2026-01-27 14:33:05 +0000 UTC" firstStartedPulling="2026-01-27 14:33:07.354777949 +0000 UTC m=+1673.938968963" lastFinishedPulling="2026-01-27 14:33:56.70116962 +0000 UTC m=+1723.285360624" observedRunningTime="2026-01-27 14:33:58.453547151 +0000 UTC m=+1725.037738155" watchObservedRunningTime="2026-01-27 14:33:58.464317959 +0000 UTC m=+1725.048508963" Jan 27 14:33:59 crc kubenswrapper[4729]: I0127 14:33:59.051439 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:33:59 crc kubenswrapper[4729]: E0127 14:33:59.051805 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:33:59 crc kubenswrapper[4729]: I0127 14:33:59.441729 4729 generic.go:334] "Generic (PLEG): container finished" podID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" containerID="ac7329e4e741fe65f9695bd0e692d86682d0f6404db0ce56ab8d15f7a7ee265a" exitCode=0 Jan 27 14:33:59 crc kubenswrapper[4729]: I0127 14:33:59.442150 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:33:59 crc kubenswrapper[4729]: I0127 14:33:59.442160 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:33:59 crc kubenswrapper[4729]: I0127 14:33:59.442179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6cnrq" event={"ID":"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9","Type":"ContainerDied","Data":"ac7329e4e741fe65f9695bd0e692d86682d0f6404db0ce56ab8d15f7a7ee265a"} Jan 27 14:34:00 crc kubenswrapper[4729]: I0127 14:34:00.891555 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6cnrq" Jan 27 14:34:00 crc kubenswrapper[4729]: I0127 14:34:00.993715 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-combined-ca-bundle\") pod \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " Jan 27 14:34:00 crc kubenswrapper[4729]: I0127 14:34:00.993809 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-config-data\") pod \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " Jan 27 14:34:00 crc kubenswrapper[4729]: I0127 14:34:00.994208 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s9jl\" (UniqueName: \"kubernetes.io/projected/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-kube-api-access-8s9jl\") pod \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\" (UID: \"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9\") " Jan 27 14:34:00 crc kubenswrapper[4729]: I0127 14:34:00.999637 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-kube-api-access-8s9jl" (OuterVolumeSpecName: "kube-api-access-8s9jl") pod "d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" (UID: "d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9"). InnerVolumeSpecName "kube-api-access-8s9jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.047495 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" (UID: "d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.096556 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s9jl\" (UniqueName: \"kubernetes.io/projected/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-kube-api-access-8s9jl\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.096597 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.105457 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-config-data" (OuterVolumeSpecName: "config-data") pod "d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" (UID: "d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.199219 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.472715 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6cnrq" event={"ID":"d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9","Type":"ContainerDied","Data":"dd7cfb1566e64eb962db45133875ad1c7243a0e5eab0c75ef87f973f5383f481"} Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.472766 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7cfb1566e64eb962db45133875ad1c7243a0e5eab0c75ef87f973f5383f481" Jan 27 14:34:01 crc kubenswrapper[4729]: I0127 14:34:01.472744 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6cnrq" Jan 27 14:34:02 crc kubenswrapper[4729]: I0127 14:34:02.761770 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:34:02 crc kubenswrapper[4729]: I0127 14:34:02.762229 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:34:02 crc kubenswrapper[4729]: I0127 14:34:02.766047 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:34:02 crc kubenswrapper[4729]: I0127 14:34:02.766166 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:34:02 crc kubenswrapper[4729]: I0127 14:34:02.766293 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:34:02 crc kubenswrapper[4729]: I0127 14:34:02.770206 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:34:05 crc kubenswrapper[4729]: I0127 14:34:05.561828 4729 generic.go:334] "Generic (PLEG): container finished" podID="99c157a2-c017-4ace-bff5-50dfef32c990" containerID="224b7cab4db35f3a104d11bb9e59d9174f138764f01f9114485effdcea39f86b" exitCode=0 Jan 27 14:34:05 crc kubenswrapper[4729]: I0127 14:34:05.561943 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6t9vl" event={"ID":"99c157a2-c017-4ace-bff5-50dfef32c990","Type":"ContainerDied","Data":"224b7cab4db35f3a104d11bb9e59d9174f138764f01f9114485effdcea39f86b"} Jan 27 14:34:06 crc kubenswrapper[4729]: I0127 14:34:06.979518 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:34:07 crc kubenswrapper[4729]: E0127 14:34:07.065710 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.079971 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-db-sync-config-data\") pod \"99c157a2-c017-4ace-bff5-50dfef32c990\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.080071 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-combined-ca-bundle\") pod \"99c157a2-c017-4ace-bff5-50dfef32c990\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.080133 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwj27\" (UniqueName: \"kubernetes.io/projected/99c157a2-c017-4ace-bff5-50dfef32c990-kube-api-access-rwj27\") pod \"99c157a2-c017-4ace-bff5-50dfef32c990\" (UID: \"99c157a2-c017-4ace-bff5-50dfef32c990\") " Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.085882 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "99c157a2-c017-4ace-bff5-50dfef32c990" (UID: "99c157a2-c017-4ace-bff5-50dfef32c990"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.086133 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c157a2-c017-4ace-bff5-50dfef32c990-kube-api-access-rwj27" (OuterVolumeSpecName: "kube-api-access-rwj27") pod "99c157a2-c017-4ace-bff5-50dfef32c990" (UID: "99c157a2-c017-4ace-bff5-50dfef32c990"). InnerVolumeSpecName "kube-api-access-rwj27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.116175 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99c157a2-c017-4ace-bff5-50dfef32c990" (UID: "99c157a2-c017-4ace-bff5-50dfef32c990"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.184523 4729 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.184839 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99c157a2-c017-4ace-bff5-50dfef32c990-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.184851 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwj27\" (UniqueName: \"kubernetes.io/projected/99c157a2-c017-4ace-bff5-50dfef32c990-kube-api-access-rwj27\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.584749 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerStarted","Data":"ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93"} Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.584908 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="ceilometer-notification-agent" containerID="cri-o://cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224" gracePeriod=30 Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.584969 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="proxy-httpd" containerID="cri-o://ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93" gracePeriod=30 Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.585030 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="sg-core" containerID="cri-o://afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116" gracePeriod=30 Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.584974 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.588699 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6t9vl" event={"ID":"99c157a2-c017-4ace-bff5-50dfef32c990","Type":"ContainerDied","Data":"e96470e2ef7ae7e72860f2726cd0fce7bda5bb09147bd0e64e25b0c159075d52"} Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.588742 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e96470e2ef7ae7e72860f2726cd0fce7bda5bb09147bd0e64e25b0c159075d52" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.588755 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6t9vl" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.847879 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8475c76cbc-gtz96"] Jan 27 14:34:07 crc kubenswrapper[4729]: E0127 14:34:07.848412 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c157a2-c017-4ace-bff5-50dfef32c990" containerName="barbican-db-sync" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.848437 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c157a2-c017-4ace-bff5-50dfef32c990" containerName="barbican-db-sync" Jan 27 14:34:07 crc kubenswrapper[4729]: E0127 14:34:07.848476 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" containerName="heat-db-sync" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.848485 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" containerName="heat-db-sync" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.848757 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c157a2-c017-4ace-bff5-50dfef32c990" containerName="barbican-db-sync" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.848787 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" containerName="heat-db-sync" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.850295 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.852428 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.852806 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.854354 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-46vg5" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.868180 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8475c76cbc-gtz96"] Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.903989 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2v4\" (UniqueName: \"kubernetes.io/projected/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-kube-api-access-9k2v4\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.904047 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-config-data-custom\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.904224 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-logs\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " 
pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.904320 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-combined-ca-bundle\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.904361 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-config-data\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.978988 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-74fdd6f9c6-65ljj"] Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.982124 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:07 crc kubenswrapper[4729]: I0127 14:34:07.985848 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.006058 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2v4\" (UniqueName: \"kubernetes.io/projected/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-kube-api-access-9k2v4\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.006121 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-config-data-custom\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.006227 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-logs\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.006289 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-combined-ca-bundle\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.006322 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-config-data\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.008684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-logs\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.016490 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74fdd6f9c6-65ljj"] Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.024549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-config-data-custom\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.024880 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-config-data\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.031804 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-combined-ca-bundle\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc 
kubenswrapper[4729]: I0127 14:34:08.047483 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2v4\" (UniqueName: \"kubernetes.io/projected/3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400-kube-api-access-9k2v4\") pod \"barbican-worker-8475c76cbc-gtz96\" (UID: \"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400\") " pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.100685 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-vqhlm"] Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.153130 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.157034 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6mq\" (UniqueName: \"kubernetes.io/projected/0c7b3357-e7e9-415b-8253-7ee68b4149a0-kube-api-access-dr6mq\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.161266 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-combined-ca-bundle\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.161464 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-config-data-custom\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: 
\"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.161567 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7b3357-e7e9-415b-8253-7ee68b4149a0-logs\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.161888 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-config-data\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.187759 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8475c76cbc-gtz96" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.211778 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-vqhlm"] Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.234249 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77bbbcf5f4-pncs7"] Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.236673 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.240391 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264446 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr6mq\" (UniqueName: \"kubernetes.io/projected/0c7b3357-e7e9-415b-8253-7ee68b4149a0-kube-api-access-dr6mq\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264552 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-combined-ca-bundle\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264615 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmj4\" (UniqueName: \"kubernetes.io/projected/17003381-d186-49a7-ae50-0ac0979c6d91-kube-api-access-wjmj4\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264653 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-config-data-custom\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 
14:34:08.264672 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264687 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264711 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-config\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264750 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7b3357-e7e9-415b-8253-7ee68b4149a0-logs\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264809 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc 
kubenswrapper[4729]: I0127 14:34:08.264887 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-config-data\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.264932 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.271038 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-combined-ca-bundle\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.271391 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-config-data-custom\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.271905 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c7b3357-e7e9-415b-8253-7ee68b4149a0-logs\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " 
pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.279577 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c7b3357-e7e9-415b-8253-7ee68b4149a0-config-data\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.287457 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6mq\" (UniqueName: \"kubernetes.io/projected/0c7b3357-e7e9-415b-8253-7ee68b4149a0-kube-api-access-dr6mq\") pod \"barbican-keystone-listener-74fdd6f9c6-65ljj\" (UID: \"0c7b3357-e7e9-415b-8253-7ee68b4149a0\") " pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.306174 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77bbbcf5f4-pncs7"] Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375442 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjmj4\" (UniqueName: \"kubernetes.io/projected/17003381-d186-49a7-ae50-0ac0979c6d91-kube-api-access-wjmj4\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375488 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52e155b0-d550-44b3-b70c-515a97c03df3-logs\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375518 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375534 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375558 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-config\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375619 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblmk\" (UniqueName: \"kubernetes.io/projected/52e155b0-d550-44b3-b70c-515a97c03df3-kube-api-access-mblmk\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375642 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-combined-ca-bundle\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375657 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375675 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375716 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.375757 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data-custom\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.376457 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-config\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.376469 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.377097 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.377254 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.377695 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.391770 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.397926 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjmj4\" (UniqueName: \"kubernetes.io/projected/17003381-d186-49a7-ae50-0ac0979c6d91-kube-api-access-wjmj4\") pod \"dnsmasq-dns-7c67bffd47-vqhlm\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.477534 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblmk\" (UniqueName: \"kubernetes.io/projected/52e155b0-d550-44b3-b70c-515a97c03df3-kube-api-access-mblmk\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.477597 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-combined-ca-bundle\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.477619 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.477731 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data-custom\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: 
\"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.477897 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52e155b0-d550-44b3-b70c-515a97c03df3-logs\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.478424 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52e155b0-d550-44b3-b70c-515a97c03df3-logs\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.485621 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data-custom\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.486150 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.498220 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-combined-ca-bundle\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc 
kubenswrapper[4729]: I0127 14:34:08.511940 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblmk\" (UniqueName: \"kubernetes.io/projected/52e155b0-d550-44b3-b70c-515a97c03df3-kube-api-access-mblmk\") pod \"barbican-api-77bbbcf5f4-pncs7\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.518437 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.609116 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.614694 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerID="ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93" exitCode=0 Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.614734 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerID="afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116" exitCode=2 Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.614784 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerDied","Data":"ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93"} Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.614819 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerDied","Data":"afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116"} Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.621504 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="f7e4c134-9472-463f-b7be-226acbf7954b" containerID="7107d07701c55bd904425d30531252a639609fa7aec898ce377fada986bc5b1c" exitCode=0 Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.621577 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f9tpr" event={"ID":"f7e4c134-9472-463f-b7be-226acbf7954b","Type":"ContainerDied","Data":"7107d07701c55bd904425d30531252a639609fa7aec898ce377fada986bc5b1c"} Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.794941 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8475c76cbc-gtz96"] Jan 27 14:34:08 crc kubenswrapper[4729]: I0127 14:34:08.813738 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.072304 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-74fdd6f9c6-65ljj"] Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.241040 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-vqhlm"] Jan 27 14:34:09 crc kubenswrapper[4729]: W0127 14:34:09.407494 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52e155b0_d550_44b3_b70c_515a97c03df3.slice/crio-2437aa502c488ba1f0d75f490c560dee27b90522947aeb65604fac03ad49d2c4 WatchSource:0}: Error finding container 2437aa502c488ba1f0d75f490c560dee27b90522947aeb65604fac03ad49d2c4: Status 404 returned error can't find the container with id 2437aa502c488ba1f0d75f490c560dee27b90522947aeb65604fac03ad49d2c4 Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.410762 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77bbbcf5f4-pncs7"] Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.641001 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" 
event={"ID":"0c7b3357-e7e9-415b-8253-7ee68b4149a0","Type":"ContainerStarted","Data":"a1f216c1ae7bc57f127f099e605ceaf34008c10eef7c8453d463f90a3e164484"} Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.668158 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8475c76cbc-gtz96" event={"ID":"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400","Type":"ContainerStarted","Data":"df4053f4241e3cf2396d356a17df04a36cbfbbb3641e4fee2bf2f25a62147a3a"} Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.677572 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77bbbcf5f4-pncs7" event={"ID":"52e155b0-d550-44b3-b70c-515a97c03df3","Type":"ContainerStarted","Data":"88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602"} Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.677807 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77bbbcf5f4-pncs7" event={"ID":"52e155b0-d550-44b3-b70c-515a97c03df3","Type":"ContainerStarted","Data":"2437aa502c488ba1f0d75f490c560dee27b90522947aeb65604fac03ad49d2c4"} Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.708172 4729 generic.go:334] "Generic (PLEG): container finished" podID="17003381-d186-49a7-ae50-0ac0979c6d91" containerID="38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50" exitCode=0 Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.708336 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" event={"ID":"17003381-d186-49a7-ae50-0ac0979c6d91","Type":"ContainerDied","Data":"38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50"} Jan 27 14:34:09 crc kubenswrapper[4729]: I0127 14:34:09.708378 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" event={"ID":"17003381-d186-49a7-ae50-0ac0979c6d91","Type":"ContainerStarted","Data":"8e7233299a555231a36be2e41637361adc7d9bf32c00be9ec19ec508ab44a47f"} Jan 27 
14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.400150 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.535690 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-config-data\") pod \"f7e4c134-9472-463f-b7be-226acbf7954b\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.535770 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-combined-ca-bundle\") pod \"f7e4c134-9472-463f-b7be-226acbf7954b\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.535806 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-scripts\") pod \"f7e4c134-9472-463f-b7be-226acbf7954b\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.535828 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-db-sync-config-data\") pod \"f7e4c134-9472-463f-b7be-226acbf7954b\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.535845 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67jhn\" (UniqueName: \"kubernetes.io/projected/f7e4c134-9472-463f-b7be-226acbf7954b-kube-api-access-67jhn\") pod \"f7e4c134-9472-463f-b7be-226acbf7954b\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 
14:34:10.535868 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7e4c134-9472-463f-b7be-226acbf7954b-etc-machine-id\") pod \"f7e4c134-9472-463f-b7be-226acbf7954b\" (UID: \"f7e4c134-9472-463f-b7be-226acbf7954b\") " Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.536539 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7e4c134-9472-463f-b7be-226acbf7954b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f7e4c134-9472-463f-b7be-226acbf7954b" (UID: "f7e4c134-9472-463f-b7be-226acbf7954b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.545674 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e4c134-9472-463f-b7be-226acbf7954b-kube-api-access-67jhn" (OuterVolumeSpecName: "kube-api-access-67jhn") pod "f7e4c134-9472-463f-b7be-226acbf7954b" (UID: "f7e4c134-9472-463f-b7be-226acbf7954b"). InnerVolumeSpecName "kube-api-access-67jhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.548451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f7e4c134-9472-463f-b7be-226acbf7954b" (UID: "f7e4c134-9472-463f-b7be-226acbf7954b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.573370 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-scripts" (OuterVolumeSpecName: "scripts") pod "f7e4c134-9472-463f-b7be-226acbf7954b" (UID: "f7e4c134-9472-463f-b7be-226acbf7954b"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.586173 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7e4c134-9472-463f-b7be-226acbf7954b" (UID: "f7e4c134-9472-463f-b7be-226acbf7954b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.617954 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-config-data" (OuterVolumeSpecName: "config-data") pod "f7e4c134-9472-463f-b7be-226acbf7954b" (UID: "f7e4c134-9472-463f-b7be-226acbf7954b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.638806 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.638849 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.638861 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.638874 4729 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f7e4c134-9472-463f-b7be-226acbf7954b-db-sync-config-data\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.638889 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67jhn\" (UniqueName: \"kubernetes.io/projected/f7e4c134-9472-463f-b7be-226acbf7954b-kube-api-access-67jhn\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.638915 4729 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f7e4c134-9472-463f-b7be-226acbf7954b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.755337 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" event={"ID":"17003381-d186-49a7-ae50-0ac0979c6d91","Type":"ContainerStarted","Data":"a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6"} Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.756626 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.764797 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77bbbcf5f4-pncs7" event={"ID":"52e155b0-d550-44b3-b70c-515a97c03df3","Type":"ContainerStarted","Data":"8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422"} Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.765624 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.765755 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.772826 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f9tpr" 
event={"ID":"f7e4c134-9472-463f-b7be-226acbf7954b","Type":"ContainerDied","Data":"403a5e4a35a0eaa941d3e667c37dbd789384968613294aec41ed3c07c6e988e4"} Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.772863 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403a5e4a35a0eaa941d3e667c37dbd789384968613294aec41ed3c07c6e988e4" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.772976 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f9tpr" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.838963 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" podStartSLOduration=3.838919957 podStartE2EDuration="3.838919957s" podCreationTimestamp="2026-01-27 14:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:10.786364935 +0000 UTC m=+1737.370555949" watchObservedRunningTime="2026-01-27 14:34:10.838919957 +0000 UTC m=+1737.423110971" Jan 27 14:34:10 crc kubenswrapper[4729]: I0127 14:34:10.885974 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podStartSLOduration=2.885954178 podStartE2EDuration="2.885954178s" podCreationTimestamp="2026-01-27 14:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:10.821598649 +0000 UTC m=+1737.405789653" watchObservedRunningTime="2026-01-27 14:34:10.885954178 +0000 UTC m=+1737.470145182" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.008701 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:11 crc kubenswrapper[4729]: E0127 14:34:11.009319 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f7e4c134-9472-463f-b7be-226acbf7954b" containerName="cinder-db-sync" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.009348 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e4c134-9472-463f-b7be-226acbf7954b" containerName="cinder-db-sync" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.009647 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e4c134-9472-463f-b7be-226acbf7954b" containerName="cinder-db-sync" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.011437 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.015732 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p57lw" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.016067 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.017064 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.017760 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.046345 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.156368 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-vqhlm"] Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.192348 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " 
pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.192413 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.192544 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ef21af-221b-489b-9a82-3a3626e73911-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.192597 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vxs\" (UniqueName: \"kubernetes.io/projected/d2ef21af-221b-489b-9a82-3a3626e73911-kube-api-access-l4vxs\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.192741 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.192865 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 
crc kubenswrapper[4729]: I0127 14:34:11.193531 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-smj9b"] Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.195872 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.233233 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-smj9b"] Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.284002 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.287175 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.292385 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294604 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294669 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294804 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294820 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294866 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-config\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294892 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294944 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ef21af-221b-489b-9a82-3a3626e73911-etc-machine-id\") pod \"cinder-scheduler-0\" 
(UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.294988 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vxs\" (UniqueName: \"kubernetes.io/projected/d2ef21af-221b-489b-9a82-3a3626e73911-kube-api-access-l4vxs\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.295013 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.295041 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.295057 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xplx6\" (UniqueName: \"kubernetes.io/projected/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-kube-api-access-xplx6\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.296190 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ef21af-221b-489b-9a82-3a3626e73911-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.303074 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-scripts\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.305379 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.305840 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.324521 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.327555 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vxs\" (UniqueName: \"kubernetes.io/projected/d2ef21af-221b-489b-9a82-3a3626e73911-kube-api-access-l4vxs\") pod \"cinder-scheduler-0\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.327632 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-api-0"] Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.372087 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399326 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399385 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399417 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399436 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xplx6\" (UniqueName: \"kubernetes.io/projected/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-kube-api-access-xplx6\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399465 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399486 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399501 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbzm\" (UniqueName: \"kubernetes.io/projected/f6cccf10-226d-4521-950d-533735bf68ef-kube-api-access-fsbzm\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399561 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-scripts\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399585 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cccf10-226d-4521-950d-533735bf68ef-logs\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399607 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data\") pod \"cinder-api-0\" (UID: 
\"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399666 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6cccf10-226d-4521-950d-533735bf68ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399719 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-config\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.399739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.400534 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.400652 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc 
kubenswrapper[4729]: I0127 14:34:11.401112 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-config\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.402718 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.403600 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.422839 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xplx6\" (UniqueName: \"kubernetes.io/projected/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-kube-api-access-xplx6\") pod \"dnsmasq-dns-5cc8b5d5c5-smj9b\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.502717 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-scripts\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.502774 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/f6cccf10-226d-4521-950d-533735bf68ef-logs\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.502803 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.502898 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6cccf10-226d-4521-950d-533735bf68ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.503019 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.503098 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.503135 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsbzm\" (UniqueName: \"kubernetes.io/projected/f6cccf10-226d-4521-950d-533735bf68ef-kube-api-access-fsbzm\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 
crc kubenswrapper[4729]: I0127 14:34:11.506744 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cccf10-226d-4521-950d-533735bf68ef-logs\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.507176 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-scripts\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.507658 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6cccf10-226d-4521-950d-533735bf68ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.510346 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.510374 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.511646 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.520936 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsbzm\" (UniqueName: \"kubernetes.io/projected/f6cccf10-226d-4521-950d-533735bf68ef-kube-api-access-fsbzm\") pod \"cinder-api-0\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " pod="openstack/cinder-api-0" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.526693 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:11 crc kubenswrapper[4729]: I0127 14:34:11.809155 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:34:12 crc kubenswrapper[4729]: I0127 14:34:12.848019 4729 generic.go:334] "Generic (PLEG): container finished" podID="36bba371-e800-414e-8523-51e905e6d074" containerID="8cbf1ffe2c846c7fddb8115a76cdfe2c3b5726529c1f610e94be15ff9e57dc17" exitCode=0 Jan 27 14:34:12 crc kubenswrapper[4729]: I0127 14:34:12.848709 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" containerName="dnsmasq-dns" containerID="cri-o://a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6" gracePeriod=10 Jan 27 14:34:12 crc kubenswrapper[4729]: I0127 14:34:12.849214 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dh77n" event={"ID":"36bba371-e800-414e-8523-51e905e6d074","Type":"ContainerDied","Data":"8cbf1ffe2c846c7fddb8115a76cdfe2c3b5726529c1f610e94be15ff9e57dc17"} Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.052254 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:34:13 crc kubenswrapper[4729]: E0127 14:34:13.052648 4729 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.188869 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.557148 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.586765 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-smj9b"] Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.706612 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.753226 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.782164 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-swift-storage-0\") pod \"17003381-d186-49a7-ae50-0ac0979c6d91\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.782493 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-sb\") pod \"17003381-d186-49a7-ae50-0ac0979c6d91\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.782616 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-svc\") pod \"17003381-d186-49a7-ae50-0ac0979c6d91\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.782681 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-nb\") pod \"17003381-d186-49a7-ae50-0ac0979c6d91\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.782717 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-config\") pod \"17003381-d186-49a7-ae50-0ac0979c6d91\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.782805 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjmj4\" (UniqueName: 
\"kubernetes.io/projected/17003381-d186-49a7-ae50-0ac0979c6d91-kube-api-access-wjmj4\") pod \"17003381-d186-49a7-ae50-0ac0979c6d91\" (UID: \"17003381-d186-49a7-ae50-0ac0979c6d91\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.815688 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17003381-d186-49a7-ae50-0ac0979c6d91-kube-api-access-wjmj4" (OuterVolumeSpecName: "kube-api-access-wjmj4") pod "17003381-d186-49a7-ae50-0ac0979c6d91" (UID: "17003381-d186-49a7-ae50-0ac0979c6d91"). InnerVolumeSpecName "kube-api-access-wjmj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.888879 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjmj4\" (UniqueName: \"kubernetes.io/projected/17003381-d186-49a7-ae50-0ac0979c6d91-kube-api-access-wjmj4\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.929422 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17003381-d186-49a7-ae50-0ac0979c6d91" (UID: "17003381-d186-49a7-ae50-0ac0979c6d91"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.965729 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerID="cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224" exitCode=0 Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.965820 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerDied","Data":"cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224"} Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.965848 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b28343-b8b8-4b61-9c61-0003f8ca6556","Type":"ContainerDied","Data":"1e8c3490ed7fc07264fb17eea7a0185a5b81c8b2606b85c05807cc8973f5074b"} Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.965912 4729 scope.go:117] "RemoveContainer" containerID="ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.966112 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.981351 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2ef21af-221b-489b-9a82-3a3626e73911","Type":"ContainerStarted","Data":"8618150c4dbb53e8eae3d1166119da8823fa6d061d0ba4991bae082a1b0b1d02"} Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.989319 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17003381-d186-49a7-ae50-0ac0979c6d91" (UID: "17003381-d186-49a7-ae50-0ac0979c6d91"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993516 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bzmr\" (UniqueName: \"kubernetes.io/projected/a9b28343-b8b8-4b61-9c61-0003f8ca6556-kube-api-access-9bzmr\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993560 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-combined-ca-bundle\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993722 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-run-httpd\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993750 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-config-data\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993810 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-scripts\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993878 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-log-httpd\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.993911 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-sg-core-conf-yaml\") pod \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\" (UID: \"a9b28343-b8b8-4b61-9c61-0003f8ca6556\") " Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.994471 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.994487 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.999169 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.999189 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17003381-d186-49a7-ae50-0ac0979c6d91" (UID: "17003381-d186-49a7-ae50-0ac0979c6d91"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:13 crc kubenswrapper[4729]: I0127 14:34:13.999411 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b28343-b8b8-4b61-9c61-0003f8ca6556-kube-api-access-9bzmr" (OuterVolumeSpecName: "kube-api-access-9bzmr") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "kube-api-access-9bzmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.000115 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.003865 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f6cccf10-226d-4521-950d-533735bf68ef","Type":"ContainerStarted","Data":"1cf00e6a51c15e6707328a9eb869b262df11c0c0966d68461f3a4985d1bc3a56"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.008371 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17003381-d186-49a7-ae50-0ac0979c6d91" (UID: "17003381-d186-49a7-ae50-0ac0979c6d91"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.015656 4729 generic.go:334] "Generic (PLEG): container finished" podID="17003381-d186-49a7-ae50-0ac0979c6d91" containerID="a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6" exitCode=0 Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.015788 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.016132 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-scripts" (OuterVolumeSpecName: "scripts") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.010106 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" event={"ID":"d9fece58-0c7c-43e3-9c50-bebf98bde9b0","Type":"ContainerStarted","Data":"5ba9a05433ceced384c93b44ac302e16083b8304cb295585c1a34f596b1eebcb"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.021401 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" event={"ID":"17003381-d186-49a7-ae50-0ac0979c6d91","Type":"ContainerDied","Data":"a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.021475 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-vqhlm" event={"ID":"17003381-d186-49a7-ae50-0ac0979c6d91","Type":"ContainerDied","Data":"8e7233299a555231a36be2e41637361adc7d9bf32c00be9ec19ec508ab44a47f"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.038183 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" event={"ID":"0c7b3357-e7e9-415b-8253-7ee68b4149a0","Type":"ContainerStarted","Data":"30443e979df12d6ecb09a08ef9e93890badfe99987b6cd63d96f4819eda37c50"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.038458 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" event={"ID":"0c7b3357-e7e9-415b-8253-7ee68b4149a0","Type":"ContainerStarted","Data":"a437bcb15462c0b22c7801acdbee88c1f9979e2186f7642f4fe8ef480fa24c05"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.048119 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-config" (OuterVolumeSpecName: "config") pod "17003381-d186-49a7-ae50-0ac0979c6d91" (UID: "17003381-d186-49a7-ae50-0ac0979c6d91"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.086917 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.089937 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-74fdd6f9c6-65ljj" podStartSLOduration=3.567683179 podStartE2EDuration="7.089868194s" podCreationTimestamp="2026-01-27 14:34:07 +0000 UTC" firstStartedPulling="2026-01-27 14:34:09.065686309 +0000 UTC m=+1735.649877313" lastFinishedPulling="2026-01-27 14:34:12.587871324 +0000 UTC m=+1739.172062328" observedRunningTime="2026-01-27 14:34:14.084114155 +0000 UTC m=+1740.668305159" watchObservedRunningTime="2026-01-27 14:34:14.089868194 +0000 UTC m=+1740.674059198" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.100387 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bzmr\" (UniqueName: \"kubernetes.io/projected/a9b28343-b8b8-4b61-9c61-0003f8ca6556-kube-api-access-9bzmr\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101233 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101444 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101544 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101702 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-dns-svc\") on node \"crc\" 
DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101721 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b28343-b8b8-4b61-9c61-0003f8ca6556-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101730 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.101740 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17003381-d186-49a7-ae50-0ac0979c6d91-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.123145 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.165924 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8475c76cbc-gtz96" event={"ID":"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400","Type":"ContainerStarted","Data":"3be8f9fb664c55eedc8cdd34aa52b1d866a691905efa0b8af333af1e6df4cfff"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.165969 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8475c76cbc-gtz96" event={"ID":"3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400","Type":"ContainerStarted","Data":"46bb9fb481287cd3dd15fb118fbeb95773277b0f8b7306fb309db8c97feaee20"} Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.204340 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.213997 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-config-data" (OuterVolumeSpecName: "config-data") pod "a9b28343-b8b8-4b61-9c61-0003f8ca6556" (UID: "a9b28343-b8b8-4b61-9c61-0003f8ca6556"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.295177 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8475c76cbc-gtz96" podStartSLOduration=3.527420744 podStartE2EDuration="7.29515539s" podCreationTimestamp="2026-01-27 14:34:07 +0000 UTC" firstStartedPulling="2026-01-27 14:34:08.813502726 +0000 UTC m=+1735.397693730" lastFinishedPulling="2026-01-27 14:34:12.581237372 +0000 UTC m=+1739.165428376" observedRunningTime="2026-01-27 14:34:14.287656612 +0000 UTC m=+1740.871847636" watchObservedRunningTime="2026-01-27 14:34:14.29515539 +0000 UTC m=+1740.879346394" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.322087 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b28343-b8b8-4b61-9c61-0003f8ca6556-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.459177 4729 scope.go:117] "RemoveContainer" containerID="afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.508497 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.521347 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.540198 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-vqhlm"] Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.553164 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.554046 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="proxy-httpd" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554077 4729 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="proxy-httpd" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.554102 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" containerName="dnsmasq-dns" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554114 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" containerName="dnsmasq-dns" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.554132 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" containerName="init" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554141 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" containerName="init" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.554153 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="sg-core" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554160 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="sg-core" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.554171 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="ceilometer-notification-agent" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554178 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="ceilometer-notification-agent" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554468 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="ceilometer-notification-agent" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554493 4729 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="sg-core" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554507 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" containerName="dnsmasq-dns" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.554520 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" containerName="proxy-httpd" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.557157 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.562564 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.562798 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.570075 4729 scope.go:117] "RemoveContainer" containerID="cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.572179 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-vqhlm"] Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.604695 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.619712 4729 scope.go:117] "RemoveContainer" containerID="ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.620371 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93\": container with ID starting with 
ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93 not found: ID does not exist" containerID="ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.620429 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93"} err="failed to get container status \"ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93\": rpc error: code = NotFound desc = could not find container \"ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93\": container with ID starting with ed4b851d8efabb8b28c05f8735e84c10301df0fb60daf54b17412b2bfd692c93 not found: ID does not exist" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.620464 4729 scope.go:117] "RemoveContainer" containerID="afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.621051 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116\": container with ID starting with afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116 not found: ID does not exist" containerID="afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.621082 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116"} err="failed to get container status \"afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116\": rpc error: code = NotFound desc = could not find container \"afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116\": container with ID starting with afa73cdbab9bd2fd341d1caf3167f143370f9e5f93f50a99df5d2c4c6d19b116 not found: ID does not 
exist" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.621103 4729 scope.go:117] "RemoveContainer" containerID="cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.621386 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224\": container with ID starting with cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224 not found: ID does not exist" containerID="cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.621414 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224"} err="failed to get container status \"cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224\": rpc error: code = NotFound desc = could not find container \"cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224\": container with ID starting with cb198ae6d67ce64d1df7ecb4fd5f150042d9bbfed8090554abdff4f43c7e6224 not found: ID does not exist" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.621431 4729 scope.go:117] "RemoveContainer" containerID="a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.679639 4729 scope.go:117] "RemoveContainer" containerID="38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.737203 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 
crc kubenswrapper[4729]: I0127 14:34:14.737259 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-scripts\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.737281 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-config-data\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.737324 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.737447 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-run-httpd\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.737520 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt226\" (UniqueName: \"kubernetes.io/projected/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-kube-api-access-pt226\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.737543 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-log-httpd\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.829563 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845638 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-run-httpd\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845730 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt226\" (UniqueName: \"kubernetes.io/projected/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-kube-api-access-pt226\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845757 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-log-httpd\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845818 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845837 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-scripts\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-config-data\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.845953 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.849629 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-log-httpd\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.850158 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-run-httpd\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.856361 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.858047 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-config-data\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.858482 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.859281 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-scripts\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.905633 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt226\" (UniqueName: \"kubernetes.io/projected/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-kube-api-access-pt226\") pod \"ceilometer-0\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " pod="openstack/ceilometer-0" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.982169 4729 scope.go:117] "RemoveContainer" containerID="a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.985180 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6\": container with ID starting with a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6 not found: ID does not exist" containerID="a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6" Jan 27 14:34:14 crc 
kubenswrapper[4729]: I0127 14:34:14.985226 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6"} err="failed to get container status \"a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6\": rpc error: code = NotFound desc = could not find container \"a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6\": container with ID starting with a1532fc7e9b99719daca411251ccadd2a6e384ffa3f53a70429e083d541c98f6 not found: ID does not exist" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.985256 4729 scope.go:117] "RemoveContainer" containerID="38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50" Jan 27 14:34:14 crc kubenswrapper[4729]: E0127 14:34:14.994117 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50\": container with ID starting with 38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50 not found: ID does not exist" containerID="38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50" Jan 27 14:34:14 crc kubenswrapper[4729]: I0127 14:34:14.994288 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50"} err="failed to get container status \"38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50\": rpc error: code = NotFound desc = could not find container \"38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50\": container with ID starting with 38a881b73644fe9b929082a7497ea63ad3bbba801a744d88c62e7793bb07be50 not found: ID does not exist" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.181314 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dh77n" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.181988 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dh77n" event={"ID":"36bba371-e800-414e-8523-51e905e6d074","Type":"ContainerDied","Data":"9187f33e53c7c80498b4447e9458f7ba02b76721f6f0508a10304dd4b535443f"} Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.182076 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9187f33e53c7c80498b4447e9458f7ba02b76721f6f0508a10304dd4b535443f" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.196242 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.196585 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f6cccf10-226d-4521-950d-533735bf68ef","Type":"ContainerStarted","Data":"ded643590c8d1476b913ae511f611b2e60b661bf718d06c08b75c22e7f3ddb2f"} Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.249474 4729 generic.go:334] "Generic (PLEG): container finished" podID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerID="6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8" exitCode=0 Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.249708 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" event={"ID":"d9fece58-0c7c-43e3-9c50-bebf98bde9b0","Type":"ContainerDied","Data":"6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8"} Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.258651 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4f9z\" (UniqueName: \"kubernetes.io/projected/36bba371-e800-414e-8523-51e905e6d074-kube-api-access-d4f9z\") pod \"36bba371-e800-414e-8523-51e905e6d074\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " Jan 27 
14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.276150 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-combined-ca-bundle\") pod \"36bba371-e800-414e-8523-51e905e6d074\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.276417 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-config\") pod \"36bba371-e800-414e-8523-51e905e6d074\" (UID: \"36bba371-e800-414e-8523-51e905e6d074\") " Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.305681 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36bba371-e800-414e-8523-51e905e6d074-kube-api-access-d4f9z" (OuterVolumeSpecName: "kube-api-access-d4f9z") pod "36bba371-e800-414e-8523-51e905e6d074" (UID: "36bba371-e800-414e-8523-51e905e6d074"). InnerVolumeSpecName "kube-api-access-d4f9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.381621 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4f9z\" (UniqueName: \"kubernetes.io/projected/36bba371-e800-414e-8523-51e905e6d074-kube-api-access-d4f9z\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.426959 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36bba371-e800-414e-8523-51e905e6d074" (UID: "36bba371-e800-414e-8523-51e905e6d074"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.442026 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-config" (OuterVolumeSpecName: "config") pod "36bba371-e800-414e-8523-51e905e6d074" (UID: "36bba371-e800-414e-8523-51e905e6d074"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.484356 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.484397 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/36bba371-e800-414e-8523-51e905e6d074-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.829017 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b46b856c4-72fkv"] Jan 27 14:34:15 crc kubenswrapper[4729]: E0127 14:34:15.829966 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36bba371-e800-414e-8523-51e905e6d074" containerName="neutron-db-sync" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.829982 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="36bba371-e800-414e-8523-51e905e6d074" containerName="neutron-db-sync" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.830238 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="36bba371-e800-414e-8523-51e905e6d074" containerName="neutron-db-sync" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.831374 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.834362 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.854581 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b46b856c4-72fkv"] Jan 27 14:34:15 crc kubenswrapper[4729]: I0127 14:34:15.855196 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.000962 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260a1ff1-928b-446f-9480-fb8d8fe342f1-logs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.001034 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-config-data-custom\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.001064 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-config-data\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.001100 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-public-tls-certs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.001123 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsndm\" (UniqueName: \"kubernetes.io/projected/260a1ff1-928b-446f-9480-fb8d8fe342f1-kube-api-access-fsndm\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.001208 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-combined-ca-bundle\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.001266 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-internal-tls-certs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.075254 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17003381-d186-49a7-ae50-0ac0979c6d91" path="/var/lib/kubelet/pods/17003381-d186-49a7-ae50-0ac0979c6d91/volumes" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.076530 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b28343-b8b8-4b61-9c61-0003f8ca6556" path="/var/lib/kubelet/pods/a9b28343-b8b8-4b61-9c61-0003f8ca6556/volumes" Jan 27 14:34:16 crc 
kubenswrapper[4729]: I0127 14:34:16.087031 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.104158 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-combined-ca-bundle\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.104275 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-internal-tls-certs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.104396 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260a1ff1-928b-446f-9480-fb8d8fe342f1-logs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.104475 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-config-data-custom\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.104514 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-config-data\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: 
\"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.105069 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-public-tls-certs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.105156 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsndm\" (UniqueName: \"kubernetes.io/projected/260a1ff1-928b-446f-9480-fb8d8fe342f1-kube-api-access-fsndm\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.106715 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/260a1ff1-928b-446f-9480-fb8d8fe342f1-logs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: W0127 14:34:16.114197 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf766e8ed_e4f1_4fb1_8a92_cdacd0c86969.slice/crio-e752fb27cae7e7d04da4db8520619e5e5eefdc89b5bd888b58e23e51ebdd8eea WatchSource:0}: Error finding container e752fb27cae7e7d04da4db8520619e5e5eefdc89b5bd888b58e23e51ebdd8eea: Status 404 returned error can't find the container with id e752fb27cae7e7d04da4db8520619e5e5eefdc89b5bd888b58e23e51ebdd8eea Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.116017 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-public-tls-certs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.117695 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-config-data\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.119773 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-combined-ca-bundle\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.123661 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-config-data-custom\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.131488 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/260a1ff1-928b-446f-9480-fb8d8fe342f1-internal-tls-certs\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.138636 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsndm\" (UniqueName: 
\"kubernetes.io/projected/260a1ff1-928b-446f-9480-fb8d8fe342f1-kube-api-access-fsndm\") pod \"barbican-api-5b46b856c4-72fkv\" (UID: \"260a1ff1-928b-446f-9480-fb8d8fe342f1\") " pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.207079 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.273475 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" event={"ID":"d9fece58-0c7c-43e3-9c50-bebf98bde9b0","Type":"ContainerStarted","Data":"e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2"} Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.274862 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.289599 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dh77n" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.289988 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerStarted","Data":"e752fb27cae7e7d04da4db8520619e5e5eefdc89b5bd888b58e23e51ebdd8eea"} Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.307453 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" podStartSLOduration=5.307429668 podStartE2EDuration="5.307429668s" podCreationTimestamp="2026-01-27 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:16.303120479 +0000 UTC m=+1742.887311503" watchObservedRunningTime="2026-01-27 14:34:16.307429668 +0000 UTC m=+1742.891620672" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.639495 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-smj9b"] Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.734843 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mg9rv"] Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.762410 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.865083 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mg9rv"] Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.904399 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.904521 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.904569 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-config\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.904694 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.904770 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pd7qc\" (UniqueName: \"kubernetes.io/projected/ef6b3230-207e-4f3b-b095-dae641faebef-kube-api-access-pd7qc\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.904922 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.946946 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bf97cd6d4-xv5bh"] Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.948865 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.954049 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.954461 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.954666 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.955146 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gfn2v" Jan 27 14:34:16 crc kubenswrapper[4729]: I0127 14:34:16.986487 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bf97cd6d4-xv5bh"] Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.012086 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.012157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.012183 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-config\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.012239 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.012281 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd7qc\" (UniqueName: \"kubernetes.io/projected/ef6b3230-207e-4f3b-b095-dae641faebef-kube-api-access-pd7qc\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.012352 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.013900 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-svc\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.014420 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.014919 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.014945 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-config\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.015398 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.063284 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd7qc\" (UniqueName: \"kubernetes.io/projected/ef6b3230-207e-4f3b-b095-dae641faebef-kube-api-access-pd7qc\") pod \"dnsmasq-dns-6578955fd5-mg9rv\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.118487 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-config\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.118548 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-httpd-config\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.118622 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-combined-ca-bundle\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.118791 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntn7j\" (UniqueName: \"kubernetes.io/projected/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-kube-api-access-ntn7j\") pod 
\"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.118824 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-ovndb-tls-certs\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.179557 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b46b856c4-72fkv"] Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.213337 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.226312 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-config\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.226365 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-httpd-config\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.226416 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-combined-ca-bundle\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc 
kubenswrapper[4729]: I0127 14:34:17.226496 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntn7j\" (UniqueName: \"kubernetes.io/projected/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-kube-api-access-ntn7j\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.226519 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-ovndb-tls-certs\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.232598 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-httpd-config\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.234869 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-combined-ca-bundle\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.239858 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-config\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.265869 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-ovndb-tls-certs\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.275754 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntn7j\" (UniqueName: \"kubernetes.io/projected/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-kube-api-access-ntn7j\") pod \"neutron-bf97cd6d4-xv5bh\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.296613 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.377782 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2ef21af-221b-489b-9a82-3a3626e73911","Type":"ContainerStarted","Data":"e278680905edc20897f5e3450ddec07a258ebeb1e0c3145525354855bb9092ba"} Jan 27 14:34:17 crc kubenswrapper[4729]: I0127 14:34:17.385741 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46b856c4-72fkv" event={"ID":"260a1ff1-928b-446f-9480-fb8d8fe342f1","Type":"ContainerStarted","Data":"ea45d742d4ea57a0efbd85b379f30031aeb36202d4f99eccae7fb009599fda48"} Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.432333 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46b856c4-72fkv" event={"ID":"260a1ff1-928b-446f-9480-fb8d8fe342f1","Type":"ContainerStarted","Data":"a8f7b434eb3040694654001a3bb6ad53fe3e140679ca08a5c04fc9d9da8225eb"} Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.463336 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerName="dnsmasq-dns" 
containerID="cri-o://e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2" gracePeriod=10 Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.463770 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api-log" containerID="cri-o://ded643590c8d1476b913ae511f611b2e60b661bf718d06c08b75c22e7f3ddb2f" gracePeriod=30 Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.463828 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f6cccf10-226d-4521-950d-533735bf68ef","Type":"ContainerStarted","Data":"1c97ff8f810ae76f14fe13469078fce4df6b5d9c39cc661299c69eff3fd9ee73"} Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.464212 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api" containerID="cri-o://1c97ff8f810ae76f14fe13469078fce4df6b5d9c39cc661299c69eff3fd9ee73" gracePeriod=30 Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.464226 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.522461 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.522433111 podStartE2EDuration="7.522433111s" podCreationTimestamp="2026-01-27 14:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:18.502342375 +0000 UTC m=+1745.086533379" watchObservedRunningTime="2026-01-27 14:34:18.522433111 +0000 UTC m=+1745.106624115" Jan 27 14:34:18 crc kubenswrapper[4729]: I0127 14:34:18.650245 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mg9rv"] Jan 27 14:34:19 crc 
kubenswrapper[4729]: I0127 14:34:19.202291 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bf97cd6d4-xv5bh"] Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.394018 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.431114 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-svc\") pod \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.431165 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-sb\") pod \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.431383 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-swift-storage-0\") pod \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.431404 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-nb\") pod \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.431491 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-config\") pod 
\"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.431577 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xplx6\" (UniqueName: \"kubernetes.io/projected/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-kube-api-access-xplx6\") pod \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\" (UID: \"d9fece58-0c7c-43e3-9c50-bebf98bde9b0\") " Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.442502 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-kube-api-access-xplx6" (OuterVolumeSpecName: "kube-api-access-xplx6") pod "d9fece58-0c7c-43e3-9c50-bebf98bde9b0" (UID: "d9fece58-0c7c-43e3-9c50-bebf98bde9b0"). InnerVolumeSpecName "kube-api-access-xplx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.550842 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xplx6\" (UniqueName: \"kubernetes.io/projected/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-kube-api-access-xplx6\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.555039 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" event={"ID":"ef6b3230-207e-4f3b-b095-dae641faebef","Type":"ContainerStarted","Data":"942cb605ab964db4954782349787aa62e2c8445b9b4e5b4cbc393cd0404e2d62"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.585096 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2ef21af-221b-489b-9a82-3a3626e73911","Type":"ContainerStarted","Data":"f71c75a9a33bc14c274bf08de5bdd505fd2d45da05f180722618e0fdf2b2b196"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.599335 4729 generic.go:334] "Generic (PLEG): container finished" podID="f6cccf10-226d-4521-950d-533735bf68ef" 
containerID="1c97ff8f810ae76f14fe13469078fce4df6b5d9c39cc661299c69eff3fd9ee73" exitCode=0 Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.599373 4729 generic.go:334] "Generic (PLEG): container finished" podID="f6cccf10-226d-4521-950d-533735bf68ef" containerID="ded643590c8d1476b913ae511f611b2e60b661bf718d06c08b75c22e7f3ddb2f" exitCode=143 Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.599452 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f6cccf10-226d-4521-950d-533735bf68ef","Type":"ContainerDied","Data":"1c97ff8f810ae76f14fe13469078fce4df6b5d9c39cc661299c69eff3fd9ee73"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.599489 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f6cccf10-226d-4521-950d-533735bf68ef","Type":"ContainerDied","Data":"ded643590c8d1476b913ae511f611b2e60b661bf718d06c08b75c22e7f3ddb2f"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.624325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerStarted","Data":"3bb8f907a60922a5b92bbfe58cb76521842e27f25b454359c062dfe9bd99400f"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.630331 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d9fece58-0c7c-43e3-9c50-bebf98bde9b0" (UID: "d9fece58-0c7c-43e3-9c50-bebf98bde9b0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.658589 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.668323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d9fece58-0c7c-43e3-9c50-bebf98bde9b0" (UID: "d9fece58-0c7c-43e3-9c50-bebf98bde9b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.672492 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.90644915 podStartE2EDuration="9.672463588s" podCreationTimestamp="2026-01-27 14:34:10 +0000 UTC" firstStartedPulling="2026-01-27 14:34:13.238570786 +0000 UTC m=+1739.822761790" lastFinishedPulling="2026-01-27 14:34:15.004585214 +0000 UTC m=+1741.588776228" observedRunningTime="2026-01-27 14:34:19.657219376 +0000 UTC m=+1746.241410390" watchObservedRunningTime="2026-01-27 14:34:19.672463588 +0000 UTC m=+1746.256654592" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.679125 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf97cd6d4-xv5bh" event={"ID":"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5","Type":"ContainerStarted","Data":"00fb40e2bbd07d3d86a607dfdd693d49807da2ddaf069950178a36331bb05963"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.715286 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.198:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.716832 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.718229 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d9fece58-0c7c-43e3-9c50-bebf98bde9b0" (UID: "d9fece58-0c7c-43e3-9c50-bebf98bde9b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.718507 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-config" (OuterVolumeSpecName: "config") pod "d9fece58-0c7c-43e3-9c50-bebf98bde9b0" (UID: "d9fece58-0c7c-43e3-9c50-bebf98bde9b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.752991 4729 generic.go:334] "Generic (PLEG): container finished" podID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerID="e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2" exitCode=0 Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.753074 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" event={"ID":"d9fece58-0c7c-43e3-9c50-bebf98bde9b0","Type":"ContainerDied","Data":"e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.753106 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" event={"ID":"d9fece58-0c7c-43e3-9c50-bebf98bde9b0","Type":"ContainerDied","Data":"5ba9a05433ceced384c93b44ac302e16083b8304cb295585c1a34f596b1eebcb"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.753126 4729 scope.go:117] "RemoveContainer" containerID="e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.753253 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-smj9b" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.769950 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.769991 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.770001 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.773135 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46b856c4-72fkv" event={"ID":"260a1ff1-928b-446f-9480-fb8d8fe342f1","Type":"ContainerStarted","Data":"b37b3bf055f4481c6c644e4ae473216496433159f05fabda632490c978cbd513"} Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.774171 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.774248 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.804543 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d9fece58-0c7c-43e3-9c50-bebf98bde9b0" (UID: "d9fece58-0c7c-43e3-9c50-bebf98bde9b0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.844839 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b46b856c4-72fkv" podStartSLOduration=4.844816873 podStartE2EDuration="4.844816873s" podCreationTimestamp="2026-01-27 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:19.83925929 +0000 UTC m=+1746.423450294" watchObservedRunningTime="2026-01-27 14:34:19.844816873 +0000 UTC m=+1746.429007877" Jan 27 14:34:19 crc kubenswrapper[4729]: I0127 14:34:19.872727 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d9fece58-0c7c-43e3-9c50-bebf98bde9b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.150360 4729 scope.go:117] "RemoveContainer" containerID="6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.187438 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-smj9b"] Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.213859 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-smj9b"] Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.381169 4729 scope.go:117] "RemoveContainer" containerID="e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2" Jan 27 14:34:20 crc kubenswrapper[4729]: E0127 14:34:20.381944 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2\": container with ID starting with e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2 not found: ID does not exist" 
containerID="e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.381985 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2"} err="failed to get container status \"e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2\": rpc error: code = NotFound desc = could not find container \"e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2\": container with ID starting with e93f24d4dc5a42742260c9026423816325a5b0125b8c26268e579be149b11ec2 not found: ID does not exist" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.382013 4729 scope.go:117] "RemoveContainer" containerID="6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8" Jan 27 14:34:20 crc kubenswrapper[4729]: E0127 14:34:20.382530 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8\": container with ID starting with 6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8 not found: ID does not exist" containerID="6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.382588 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8"} err="failed to get container status \"6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8\": rpc error: code = NotFound desc = could not find container \"6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8\": container with ID starting with 6235c37fa42d7f99c88a4725ba6e80a126d57cc71270f4ff281b468a683f40e8 not found: ID does not exist" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.427240 4729 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513382 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513476 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cccf10-226d-4521-950d-533735bf68ef-logs\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513527 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data-custom\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513558 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-combined-ca-bundle\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513744 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6cccf10-226d-4521-950d-533735bf68ef-etc-machine-id\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513819 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fsbzm\" (UniqueName: \"kubernetes.io/projected/f6cccf10-226d-4521-950d-533735bf68ef-kube-api-access-fsbzm\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.513858 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-scripts\") pod \"f6cccf10-226d-4521-950d-533735bf68ef\" (UID: \"f6cccf10-226d-4521-950d-533735bf68ef\") " Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.518234 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-scripts" (OuterVolumeSpecName: "scripts") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.524930 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.525240 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6cccf10-226d-4521-950d-533735bf68ef-logs" (OuterVolumeSpecName: "logs") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.525290 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6cccf10-226d-4521-950d-533735bf68ef-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.530190 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cccf10-226d-4521-950d-533735bf68ef-kube-api-access-fsbzm" (OuterVolumeSpecName: "kube-api-access-fsbzm") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). InnerVolumeSpecName "kube-api-access-fsbzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.613095 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.618713 4729 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f6cccf10-226d-4521-950d-533735bf68ef-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.618777 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsbzm\" (UniqueName: \"kubernetes.io/projected/f6cccf10-226d-4521-950d-533735bf68ef-kube-api-access-fsbzm\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.618793 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.618804 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6cccf10-226d-4521-950d-533735bf68ef-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.618815 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.618825 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.636734 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data" (OuterVolumeSpecName: "config-data") pod "f6cccf10-226d-4521-950d-533735bf68ef" (UID: "f6cccf10-226d-4521-950d-533735bf68ef"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.720603 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cccf10-226d-4521-950d-533735bf68ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.786088 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerStarted","Data":"eaa41ab6f4f516d8673a10a1637de426722cb259bcfc15ac0e410692f05be989"} Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.788910 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf97cd6d4-xv5bh" event={"ID":"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5","Type":"ContainerStarted","Data":"9be07918cfb846752e460e6834b8e23657061bac59a7068f67832dba9247d706"} Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.794008 4729 generic.go:334] "Generic (PLEG): container finished" podID="ef6b3230-207e-4f3b-b095-dae641faebef" containerID="3e322e83c434c8d0c7cbe6a175816d709ac3e10c3c041485fdf04e88c8f64bae" exitCode=0 Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.794120 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" event={"ID":"ef6b3230-207e-4f3b-b095-dae641faebef","Type":"ContainerDied","Data":"3e322e83c434c8d0c7cbe6a175816d709ac3e10c3c041485fdf04e88c8f64bae"} Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.797373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f6cccf10-226d-4521-950d-533735bf68ef","Type":"ContainerDied","Data":"1cf00e6a51c15e6707328a9eb869b262df11c0c0966d68461f3a4985d1bc3a56"} Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.797461 4729 scope.go:117] "RemoveContainer" containerID="1c97ff8f810ae76f14fe13469078fce4df6b5d9c39cc661299c69eff3fd9ee73" 
Jan 27 14:34:20 crc kubenswrapper[4729]: I0127 14:34:20.797510 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.063189 4729 scope.go:117] "RemoveContainer" containerID="ded643590c8d1476b913ae511f611b2e60b661bf718d06c08b75c22e7f3ddb2f" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.105940 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.131610 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.154383 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:21 crc kubenswrapper[4729]: E0127 14:34:21.155204 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerName="init" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155226 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerName="init" Jan 27 14:34:21 crc kubenswrapper[4729]: E0127 14:34:21.155294 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api-log" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155303 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api-log" Jan 27 14:34:21 crc kubenswrapper[4729]: E0127 14:34:21.155318 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerName="dnsmasq-dns" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155328 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerName="dnsmasq-dns" Jan 27 14:34:21 crc kubenswrapper[4729]: E0127 
14:34:21.155343 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155350 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155581 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" containerName="dnsmasq-dns" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155607 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.155630 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cccf10-226d-4521-950d-533735bf68ef" containerName="cinder-api-log" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.157124 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.166577 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.167011 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.167131 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.182323 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346153 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-config-data\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346490 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346531 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-scripts\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346645 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346724 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28e34a55-2a5a-4da3-8f4e-ece70df636e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346759 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e34a55-2a5a-4da3-8f4e-ece70df636e2-logs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346846 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.346963 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.347021 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p9jv\" (UniqueName: \"kubernetes.io/projected/28e34a55-2a5a-4da3-8f4e-ece70df636e2-kube-api-access-7p9jv\") pod \"cinder-api-0\" 
(UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.373647 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.383369 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.199:8080/\": dial tcp 10.217.0.199:8080: connect: connection refused" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.449773 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.449926 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/28e34a55-2a5a-4da3-8f4e-ece70df636e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.449965 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e34a55-2a5a-4da3-8f4e-ece70df636e2-logs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.450054 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " 
pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.450225 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.450332 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9jv\" (UniqueName: \"kubernetes.io/projected/28e34a55-2a5a-4da3-8f4e-ece70df636e2-kube-api-access-7p9jv\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.450376 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-config-data\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.450413 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.450469 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-scripts\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.457027 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/28e34a55-2a5a-4da3-8f4e-ece70df636e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.457436 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28e34a55-2a5a-4da3-8f4e-ece70df636e2-logs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.461561 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-scripts\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.462347 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.477605 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p9jv\" (UniqueName: \"kubernetes.io/projected/28e34a55-2a5a-4da3-8f4e-ece70df636e2-kube-api-access-7p9jv\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.483575 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.486404 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.487665 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.488382 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28e34a55-2a5a-4da3-8f4e-ece70df636e2-config-data\") pod \"cinder-api-0\" (UID: \"28e34a55-2a5a-4da3-8f4e-ece70df636e2\") " pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.502421 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.851460 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" event={"ID":"ef6b3230-207e-4f3b-b095-dae641faebef","Type":"ContainerStarted","Data":"f0b2ae95927b5264693f1c8e2510a1e221f2de9011dce389616509879143eb00"} Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.851781 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.910786 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" podStartSLOduration=5.910763296 podStartE2EDuration="5.910763296s" podCreationTimestamp="2026-01-27 14:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:21.906697353 +0000 UTC m=+1748.490888357" watchObservedRunningTime="2026-01-27 14:34:21.910763296 +0000 UTC m=+1748.494954300" Jan 27 14:34:21 crc kubenswrapper[4729]: I0127 14:34:21.962080 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerStarted","Data":"37c8bb492c96573e7bd89e066c94f79053eaeb46fb84d6480b56c9bdd299adc9"} Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.019822 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf97cd6d4-xv5bh" event={"ID":"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5","Type":"ContainerStarted","Data":"1cf591707b8587dcc2a35be6612e08e49d29952c42ff3446d035a84b14a7e76c"} Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.019930 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.053117 4729 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/neutron-85dcfc7bf5-fs787"] Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.055957 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.066179 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.066566 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.095016 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bf97cd6d4-xv5bh" podStartSLOduration=6.094992809 podStartE2EDuration="6.094992809s" podCreationTimestamp="2026-01-27 14:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:22.065605956 +0000 UTC m=+1748.649796960" watchObservedRunningTime="2026-01-27 14:34:22.094992809 +0000 UTC m=+1748.679183813" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.097616 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-public-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.097706 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scz52\" (UniqueName: \"kubernetes.io/projected/8b358632-8eef-4842-91bc-9c69460a5dea-kube-api-access-scz52\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: 
I0127 14:34:22.097731 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-combined-ca-bundle\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.097754 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-internal-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.097851 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-ovndb-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.097869 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-httpd-config\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.098060 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-config\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 
14:34:22.139644 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9fece58-0c7c-43e3-9c50-bebf98bde9b0" path="/var/lib/kubelet/pods/d9fece58-0c7c-43e3-9c50-bebf98bde9b0/volumes" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.171101 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cccf10-226d-4521-950d-533735bf68ef" path="/var/lib/kubelet/pods/f6cccf10-226d-4521-950d-533735bf68ef/volumes" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.171988 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85dcfc7bf5-fs787"] Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203199 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scz52\" (UniqueName: \"kubernetes.io/projected/8b358632-8eef-4842-91bc-9c69460a5dea-kube-api-access-scz52\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203264 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-combined-ca-bundle\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203302 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-internal-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203361 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-ovndb-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203387 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-httpd-config\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203433 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-config\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.203502 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-public-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.213154 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-public-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.218849 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-httpd-config\") pod \"neutron-85dcfc7bf5-fs787\" (UID: 
\"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.219791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-ovndb-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.222839 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-config\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.223549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-internal-tls-certs\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.229784 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b358632-8eef-4842-91bc-9c69460a5dea-combined-ca-bundle\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.235283 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scz52\" (UniqueName: \"kubernetes.io/projected/8b358632-8eef-4842-91bc-9c69460a5dea-kube-api-access-scz52\") pod \"neutron-85dcfc7bf5-fs787\" (UID: \"8b358632-8eef-4842-91bc-9c69460a5dea\") " pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: 
I0127 14:34:22.438743 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:22 crc kubenswrapper[4729]: I0127 14:34:22.522865 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 27 14:34:23 crc kubenswrapper[4729]: I0127 14:34:23.114990 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28e34a55-2a5a-4da3-8f4e-ece70df636e2","Type":"ContainerStarted","Data":"10b7a281bfed70198e19667e7655f37cda061fe5d0a9c11c664125ffd182caa6"} Jan 27 14:34:23 crc kubenswrapper[4729]: I0127 14:34:23.418299 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85dcfc7bf5-fs787"] Jan 27 14:34:23 crc kubenswrapper[4729]: W0127 14:34:23.440622 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b358632_8eef_4842_91bc_9c69460a5dea.slice/crio-9d33285b8bdba2aff9a8c6456675678b120f15af52d634060a0fac4c621ed8ee WatchSource:0}: Error finding container 9d33285b8bdba2aff9a8c6456675678b120f15af52d634060a0fac4c621ed8ee: Status 404 returned error can't find the container with id 9d33285b8bdba2aff9a8c6456675678b120f15af52d634060a0fac4c621ed8ee Jan 27 14:34:23 crc kubenswrapper[4729]: I0127 14:34:23.606840 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:23 crc kubenswrapper[4729]: I0127 14:34:23.653828 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:23 crc kubenswrapper[4729]: I0127 14:34:23.681197 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:24 crc kubenswrapper[4729]: I0127 14:34:24.189249 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dcfc7bf5-fs787" event={"ID":"8b358632-8eef-4842-91bc-9c69460a5dea","Type":"ContainerStarted","Data":"9d33285b8bdba2aff9a8c6456675678b120f15af52d634060a0fac4c621ed8ee"} Jan 27 14:34:24 crc kubenswrapper[4729]: I0127 14:34:24.699369 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:34:24 crc kubenswrapper[4729]: I0127 14:34:24.761061 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.227310 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"28e34a55-2a5a-4da3-8f4e-ece70df636e2","Type":"ContainerStarted","Data":"96221e7e4c402eed3578b0c18372ce4671d46cbbf2fe0c63ce0c3c5beb004e87"} Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.244278 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerStarted","Data":"a8b637b648095314f70a195afed3cdca299eb6d3d85459853a268076e9eb92fd"} Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.244588 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.253705 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dcfc7bf5-fs787" event={"ID":"8b358632-8eef-4842-91bc-9c69460a5dea","Type":"ContainerStarted","Data":"f6a331932ada0ef28a758cb4ec1c5047fcb391673d9b2d1df645fbe699df8d08"} 
Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.253755 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85dcfc7bf5-fs787" event={"ID":"8b358632-8eef-4842-91bc-9c69460a5dea","Type":"ContainerStarted","Data":"64121d26d34208c560e2d968caeec317b30a45051ae1dd4ec65c2ae07efdfbbe"} Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.253993 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.310543 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.6862479710000002 podStartE2EDuration="11.310519136s" podCreationTimestamp="2026-01-27 14:34:14 +0000 UTC" firstStartedPulling="2026-01-27 14:34:16.122566746 +0000 UTC m=+1742.706757750" lastFinishedPulling="2026-01-27 14:34:23.746837911 +0000 UTC m=+1750.331028915" observedRunningTime="2026-01-27 14:34:25.281313119 +0000 UTC m=+1751.865504133" watchObservedRunningTime="2026-01-27 14:34:25.310519136 +0000 UTC m=+1751.894710150" Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.332380 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85dcfc7bf5-fs787" podStartSLOduration=4.33235687 podStartE2EDuration="4.33235687s" podCreationTimestamp="2026-01-27 14:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:25.304687505 +0000 UTC m=+1751.888878529" watchObservedRunningTime="2026-01-27 14:34:25.33235687 +0000 UTC m=+1751.916547874" Jan 27 14:34:25 crc kubenswrapper[4729]: I0127 14:34:25.449726 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7cb844499b-jdr2d" Jan 27 14:34:26 crc kubenswrapper[4729]: I0127 14:34:26.268179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"28e34a55-2a5a-4da3-8f4e-ece70df636e2","Type":"ContainerStarted","Data":"b1ced7ef08290c94727e93091ffe6752d8f6f1e098dd079602cbf9a9ae55ded6"} Jan 27 14:34:26 crc kubenswrapper[4729]: I0127 14:34:26.304943 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.30491756 podStartE2EDuration="5.30491756s" podCreationTimestamp="2026-01-27 14:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:26.293060513 +0000 UTC m=+1752.877251517" watchObservedRunningTime="2026-01-27 14:34:26.30491756 +0000 UTC m=+1752.889108584" Jan 27 14:34:26 crc kubenswrapper[4729]: I0127 14:34:26.503040 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 27 14:34:26 crc kubenswrapper[4729]: I0127 14:34:26.722282 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:34:26 crc kubenswrapper[4729]: I0127 14:34:26.774308 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:27 crc kubenswrapper[4729]: I0127 14:34:27.216044 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:34:27 crc kubenswrapper[4729]: I0127 14:34:27.290130 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="cinder-scheduler" containerID="cri-o://e278680905edc20897f5e3450ddec07a258ebeb1e0c3145525354855bb9092ba" gracePeriod=30 Jan 27 14:34:27 crc kubenswrapper[4729]: I0127 14:34:27.290498 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="probe" 
containerID="cri-o://f71c75a9a33bc14c274bf08de5bdd505fd2d45da05f180722618e0fdf2b2b196" gracePeriod=30 Jan 27 14:34:27 crc kubenswrapper[4729]: I0127 14:34:27.399060 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s8s8d"] Jan 27 14:34:27 crc kubenswrapper[4729]: I0127 14:34:27.399528 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerName="dnsmasq-dns" containerID="cri-o://356261c680d35fde5ab345835cf1d45e880811180881256176404acaa9f35d70" gracePeriod=10 Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.054699 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:34:28 crc kubenswrapper[4729]: E0127 14:34:28.055646 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.320650 4729 generic.go:334] "Generic (PLEG): container finished" podID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerID="356261c680d35fde5ab345835cf1d45e880811180881256176404acaa9f35d70" exitCode=0 Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.320750 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" event={"ID":"cfa8c070-9c22-42d8-beee-59f6cda90fb0","Type":"ContainerDied","Data":"356261c680d35fde5ab345835cf1d45e880811180881256176404acaa9f35d70"} Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.321247 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" event={"ID":"cfa8c070-9c22-42d8-beee-59f6cda90fb0","Type":"ContainerDied","Data":"ef853e21941c75f697b90e3b69c0b91993d986d0a086299ce4d88622dd5acc3e"} Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.321283 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef853e21941c75f697b90e3b69c0b91993d986d0a086299ce4d88622dd5acc3e" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.378358 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.484089 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-sb\") pod \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.488054 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-swift-storage-0\") pod \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.488109 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-config\") pod \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.488293 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-svc\") pod \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " 
Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.488378 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-nb\") pod \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.488428 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hrs9\" (UniqueName: \"kubernetes.io/projected/cfa8c070-9c22-42d8-beee-59f6cda90fb0-kube-api-access-5hrs9\") pod \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\" (UID: \"cfa8c070-9c22-42d8-beee-59f6cda90fb0\") " Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.498218 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.559168 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa8c070-9c22-42d8-beee-59f6cda90fb0-kube-api-access-5hrs9" (OuterVolumeSpecName: "kube-api-access-5hrs9") pod "cfa8c070-9c22-42d8-beee-59f6cda90fb0" (UID: "cfa8c070-9c22-42d8-beee-59f6cda90fb0"). InnerVolumeSpecName "kube-api-access-5hrs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.591611 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hrs9\" (UniqueName: \"kubernetes.io/projected/cfa8c070-9c22-42d8-beee-59f6cda90fb0-kube-api-access-5hrs9\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.654935 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cfa8c070-9c22-42d8-beee-59f6cda90fb0" (UID: "cfa8c070-9c22-42d8-beee-59f6cda90fb0"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.670468 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cfa8c070-9c22-42d8-beee-59f6cda90fb0" (UID: "cfa8c070-9c22-42d8-beee-59f6cda90fb0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.681486 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cfa8c070-9c22-42d8-beee-59f6cda90fb0" (UID: "cfa8c070-9c22-42d8-beee-59f6cda90fb0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.694614 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.694652 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.694662 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.700502 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cfa8c070-9c22-42d8-beee-59f6cda90fb0" (UID: "cfa8c070-9c22-42d8-beee-59f6cda90fb0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.700927 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-config" (OuterVolumeSpecName: "config") pod "cfa8c070-9c22-42d8-beee-59f6cda90fb0" (UID: "cfa8c070-9c22-42d8-beee-59f6cda90fb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.797389 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:28 crc kubenswrapper[4729]: I0127 14:34:28.797436 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfa8c070-9c22-42d8-beee-59f6cda90fb0-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.336864 4729 generic.go:334] "Generic (PLEG): container finished" podID="d2ef21af-221b-489b-9a82-3a3626e73911" containerID="f71c75a9a33bc14c274bf08de5bdd505fd2d45da05f180722618e0fdf2b2b196" exitCode=0 Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.336940 4729 generic.go:334] "Generic (PLEG): container finished" podID="d2ef21af-221b-489b-9a82-3a3626e73911" containerID="e278680905edc20897f5e3450ddec07a258ebeb1e0c3145525354855bb9092ba" exitCode=0 Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.336967 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"d2ef21af-221b-489b-9a82-3a3626e73911","Type":"ContainerDied","Data":"f71c75a9a33bc14c274bf08de5bdd505fd2d45da05f180722618e0fdf2b2b196"} Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.337036 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-s8s8d" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.337050 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2ef21af-221b-489b-9a82-3a3626e73911","Type":"ContainerDied","Data":"e278680905edc20897f5e3450ddec07a258ebeb1e0c3145525354855bb9092ba"} Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.337073 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d2ef21af-221b-489b-9a82-3a3626e73911","Type":"ContainerDied","Data":"8618150c4dbb53e8eae3d1166119da8823fa6d061d0ba4991bae082a1b0b1d02"} Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.337089 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8618150c4dbb53e8eae3d1166119da8823fa6d061d0ba4991bae082a1b0b1d02" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.394488 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.423117 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s8s8d"] Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.445512 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-s8s8d"] Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.515201 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data\") pod \"d2ef21af-221b-489b-9a82-3a3626e73911\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.515320 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-combined-ca-bundle\") pod \"d2ef21af-221b-489b-9a82-3a3626e73911\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.515540 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ef21af-221b-489b-9a82-3a3626e73911-etc-machine-id\") pod \"d2ef21af-221b-489b-9a82-3a3626e73911\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.515691 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4vxs\" (UniqueName: \"kubernetes.io/projected/d2ef21af-221b-489b-9a82-3a3626e73911-kube-api-access-l4vxs\") pod \"d2ef21af-221b-489b-9a82-3a3626e73911\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.515737 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-scripts\") pod \"d2ef21af-221b-489b-9a82-3a3626e73911\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.515760 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data-custom\") pod \"d2ef21af-221b-489b-9a82-3a3626e73911\" (UID: \"d2ef21af-221b-489b-9a82-3a3626e73911\") " Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.516999 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2ef21af-221b-489b-9a82-3a3626e73911-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d2ef21af-221b-489b-9a82-3a3626e73911" (UID: "d2ef21af-221b-489b-9a82-3a3626e73911"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.536081 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d2ef21af-221b-489b-9a82-3a3626e73911" (UID: "d2ef21af-221b-489b-9a82-3a3626e73911"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.536126 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ef21af-221b-489b-9a82-3a3626e73911-kube-api-access-l4vxs" (OuterVolumeSpecName: "kube-api-access-l4vxs") pod "d2ef21af-221b-489b-9a82-3a3626e73911" (UID: "d2ef21af-221b-489b-9a82-3a3626e73911"). InnerVolumeSpecName "kube-api-access-l4vxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.537037 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-scripts" (OuterVolumeSpecName: "scripts") pod "d2ef21af-221b-489b-9a82-3a3626e73911" (UID: "d2ef21af-221b-489b-9a82-3a3626e73911"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.619700 4729 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d2ef21af-221b-489b-9a82-3a3626e73911-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.619747 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4vxs\" (UniqueName: \"kubernetes.io/projected/d2ef21af-221b-489b-9a82-3a3626e73911-kube-api-access-l4vxs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.619760 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.619770 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.681250 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2ef21af-221b-489b-9a82-3a3626e73911" (UID: "d2ef21af-221b-489b-9a82-3a3626e73911"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.719261 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data" (OuterVolumeSpecName: "config-data") pod "d2ef21af-221b-489b-9a82-3a3626e73911" (UID: "d2ef21af-221b-489b-9a82-3a3626e73911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.721550 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.721577 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ef21af-221b-489b-9a82-3a3626e73911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:29 crc kubenswrapper[4729]: I0127 14:34:29.761847 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-79f67449f-t7hgq" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.040047 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b46b856c4-72fkv" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.075833 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" path="/var/lib/kubelet/pods/cfa8c070-9c22-42d8-beee-59f6cda90fb0/volumes" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.112617 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-77bbbcf5f4-pncs7"] Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.112820 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-77bbbcf5f4-pncs7" 
podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api-log" containerID="cri-o://88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602" gracePeriod=30 Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.113321 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" containerID="cri-o://8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422" gracePeriod=30 Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.371414 4729 generic.go:334] "Generic (PLEG): container finished" podID="52e155b0-d550-44b3-b70c-515a97c03df3" containerID="88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602" exitCode=143 Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.371735 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77bbbcf5f4-pncs7" event={"ID":"52e155b0-d550-44b3-b70c-515a97c03df3","Type":"ContainerDied","Data":"88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602"} Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.371851 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.409015 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.430933 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.445221 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:30 crc kubenswrapper[4729]: E0127 14:34:30.445840 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerName="init" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.445865 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerName="init" Jan 27 14:34:30 crc kubenswrapper[4729]: E0127 14:34:30.445896 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerName="dnsmasq-dns" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.445906 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerName="dnsmasq-dns" Jan 27 14:34:30 crc kubenswrapper[4729]: E0127 14:34:30.445930 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="probe" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.445941 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="probe" Jan 27 14:34:30 crc kubenswrapper[4729]: E0127 14:34:30.445992 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="cinder-scheduler" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.446002 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="cinder-scheduler" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.446294 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa8c070-9c22-42d8-beee-59f6cda90fb0" containerName="dnsmasq-dns" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.446325 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="cinder-scheduler" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.446350 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" containerName="probe" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.447827 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.453063 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.467264 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.544437 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.544531 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 
14:34:30.544567 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.544610 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.544653 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5fx\" (UniqueName: \"kubernetes.io/projected/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-kube-api-access-2f5fx\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.544737 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.646986 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.647479 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.647669 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.647781 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.647961 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.647984 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.648210 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5fx\" (UniqueName: \"kubernetes.io/projected/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-kube-api-access-2f5fx\") pod 
\"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.652827 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-scripts\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.654219 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.656613 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.656694 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-config-data\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.671167 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f5fx\" (UniqueName: \"kubernetes.io/projected/c2742dbf-31d5-4550-88df-d1b01e4f7dc4-kube-api-access-2f5fx\") pod \"cinder-scheduler-0\" (UID: \"c2742dbf-31d5-4550-88df-d1b01e4f7dc4\") " pod="openstack/cinder-scheduler-0" Jan 27 14:34:30 crc kubenswrapper[4729]: I0127 14:34:30.780402 4729 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 27 14:34:31 crc kubenswrapper[4729]: I0127 14:34:31.505823 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 27 14:34:32 crc kubenswrapper[4729]: I0127 14:34:32.082354 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ef21af-221b-489b-9a82-3a3626e73911" path="/var/lib/kubelet/pods/d2ef21af-221b-489b-9a82-3a3626e73911/volumes" Jan 27 14:34:32 crc kubenswrapper[4729]: I0127 14:34:32.406531 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2742dbf-31d5-4550-88df-d1b01e4f7dc4","Type":"ContainerStarted","Data":"4d466a1b63daeeab633395e0ba39aa4414c3de0a5fd3e901dacc49f48fd32515"} Jan 27 14:34:32 crc kubenswrapper[4729]: I0127 14:34:32.406582 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2742dbf-31d5-4550-88df-d1b01e4f7dc4","Type":"ContainerStarted","Data":"f5fd261a01314d88ec6167ec7e295333ec860568ce60f4754d5588785a2a2315"} Jan 27 14:34:33 crc kubenswrapper[4729]: I0127 14:34:33.420241 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c2742dbf-31d5-4550-88df-d1b01e4f7dc4","Type":"ContainerStarted","Data":"df252e5f25874ea8bbfb2fe391bd472af6d9a0935d5b1539d1ea301f4885af1e"} Jan 27 14:34:33 crc kubenswrapper[4729]: I0127 14:34:33.449587 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.449568335 podStartE2EDuration="3.449568335s" podCreationTimestamp="2026-01-27 14:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:33.444303869 +0000 UTC m=+1760.028494883" watchObservedRunningTime="2026-01-27 14:34:33.449568335 +0000 UTC m=+1760.033759339" Jan 27 14:34:33 crc 
kubenswrapper[4729]: I0127 14:34:33.621118 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": dial tcp 10.217.0.198:9311: connect: connection refused" Jan 27 14:34:33 crc kubenswrapper[4729]: I0127 14:34:33.621231 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-77bbbcf5f4-pncs7" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": dial tcp 10.217.0.198:9311: connect: connection refused" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.098414 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.100102 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.111519 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-fxfbd" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.111777 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.116329 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.117223 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.149651 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.150111 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.150189 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.152662 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8s8d\" (UniqueName: \"kubernetes.io/projected/6f05b48c-f2f5-4b56-80da-c2135024b61f-kube-api-access-g8s8d\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.257002 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.257063 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.257117 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8s8d\" (UniqueName: \"kubernetes.io/projected/6f05b48c-f2f5-4b56-80da-c2135024b61f-kube-api-access-g8s8d\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.257165 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.259156 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.264259 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.264438 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " 
pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.291357 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8s8d\" (UniqueName: \"kubernetes.io/projected/6f05b48c-f2f5-4b56-80da-c2135024b61f-kube-api-access-g8s8d\") pod \"openstackclient\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.422100 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.434920 4729 generic.go:334] "Generic (PLEG): container finished" podID="52e155b0-d550-44b3-b70c-515a97c03df3" containerID="8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422" exitCode=0 Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.436034 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77bbbcf5f4-pncs7" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.436078 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77bbbcf5f4-pncs7" event={"ID":"52e155b0-d550-44b3-b70c-515a97c03df3","Type":"ContainerDied","Data":"8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422"} Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.436155 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77bbbcf5f4-pncs7" event={"ID":"52e155b0-d550-44b3-b70c-515a97c03df3","Type":"ContainerDied","Data":"2437aa502c488ba1f0d75f490c560dee27b90522947aeb65604fac03ad49d2c4"} Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.436174 4729 scope.go:117] "RemoveContainer" containerID="8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.456160 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.462089 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data\") pod \"52e155b0-d550-44b3-b70c-515a97c03df3\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.462395 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-combined-ca-bundle\") pod \"52e155b0-d550-44b3-b70c-515a97c03df3\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.462475 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data-custom\") pod \"52e155b0-d550-44b3-b70c-515a97c03df3\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.462522 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mblmk\" (UniqueName: \"kubernetes.io/projected/52e155b0-d550-44b3-b70c-515a97c03df3-kube-api-access-mblmk\") pod \"52e155b0-d550-44b3-b70c-515a97c03df3\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.462558 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52e155b0-d550-44b3-b70c-515a97c03df3-logs\") pod \"52e155b0-d550-44b3-b70c-515a97c03df3\" (UID: \"52e155b0-d550-44b3-b70c-515a97c03df3\") " Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.468494 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/52e155b0-d550-44b3-b70c-515a97c03df3-logs" (OuterVolumeSpecName: "logs") pod "52e155b0-d550-44b3-b70c-515a97c03df3" (UID: "52e155b0-d550-44b3-b70c-515a97c03df3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.474142 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52e155b0-d550-44b3-b70c-515a97c03df3-kube-api-access-mblmk" (OuterVolumeSpecName: "kube-api-access-mblmk") pod "52e155b0-d550-44b3-b70c-515a97c03df3" (UID: "52e155b0-d550-44b3-b70c-515a97c03df3"). InnerVolumeSpecName "kube-api-access-mblmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.484599 4729 scope.go:117] "RemoveContainer" containerID="88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.491869 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52e155b0-d550-44b3-b70c-515a97c03df3" (UID: "52e155b0-d550-44b3-b70c-515a97c03df3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.565200 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.575382 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.575430 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mblmk\" (UniqueName: \"kubernetes.io/projected/52e155b0-d550-44b3-b70c-515a97c03df3-kube-api-access-mblmk\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.575448 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52e155b0-d550-44b3-b70c-515a97c03df3-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.583277 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52e155b0-d550-44b3-b70c-515a97c03df3" (UID: "52e155b0-d550-44b3-b70c-515a97c03df3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.634305 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.647586 4729 scope.go:117] "RemoveContainer" containerID="8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422" Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.653163 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422\": container with ID starting with 8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422 not found: ID does not exist" containerID="8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.653214 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422"} err="failed to get container status \"8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422\": rpc error: code = NotFound desc = could not find container \"8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422\": container with ID starting with 8c73e4fa470f31b297ebc9f444f55a9a16b36af6d34d7c4a073f46d611de2422 not found: ID does not exist" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.653241 4729 scope.go:117] "RemoveContainer" containerID="88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602" Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.655113 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602\": container with ID starting with 88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602 not found: ID does not 
exist" containerID="88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.655293 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602"} err="failed to get container status \"88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602\": rpc error: code = NotFound desc = could not find container \"88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602\": container with ID starting with 88ecfa4a690b05e100bdb6c1695f0101566295fb76933bcba490788b714ac602 not found: ID does not exist" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.657262 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data" (OuterVolumeSpecName: "config-data") pod "52e155b0-d550-44b3-b70c-515a97c03df3" (UID: "52e155b0-d550-44b3-b70c-515a97c03df3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.666524 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.667061 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api-log" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.667076 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api-log" Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.667117 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.667126 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.667372 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.667384 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" containerName="barbican-api-log" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.668380 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.677319 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.677352 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52e155b0-d550-44b3-b70c-515a97c03df3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.696744 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.696940 4729 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 27 14:34:34 crc kubenswrapper[4729]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6f05b48c-f2f5-4b56-80da-c2135024b61f_0(6c22c7264bd0296b0e1e2bfa516f7a2a19942f8f9636b707769659279326b273): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6c22c7264bd0296b0e1e2bfa516f7a2a19942f8f9636b707769659279326b273" Netns:"/var/run/netns/a4928bcb-2ad5-4cad-9c81-7454a3e8281a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=6c22c7264bd0296b0e1e2bfa516f7a2a19942f8f9636b707769659279326b273;K8S_POD_UID=6f05b48c-f2f5-4b56-80da-c2135024b61f" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6f05b48c-f2f5-4b56-80da-c2135024b61f]: expected pod UID "6f05b48c-f2f5-4b56-80da-c2135024b61f" but got "0b4b3ce4-58fb-430f-8465-ca0a501a6aba" from Kube API Jan 27 14:34:34 crc kubenswrapper[4729]: 
': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 14:34:34 crc kubenswrapper[4729]: > Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.696991 4729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 27 14:34:34 crc kubenswrapper[4729]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_6f05b48c-f2f5-4b56-80da-c2135024b61f_0(6c22c7264bd0296b0e1e2bfa516f7a2a19942f8f9636b707769659279326b273): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6c22c7264bd0296b0e1e2bfa516f7a2a19942f8f9636b707769659279326b273" Netns:"/var/run/netns/a4928bcb-2ad5-4cad-9c81-7454a3e8281a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=6c22c7264bd0296b0e1e2bfa516f7a2a19942f8f9636b707769659279326b273;K8S_POD_UID=6f05b48c-f2f5-4b56-80da-c2135024b61f" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/6f05b48c-f2f5-4b56-80da-c2135024b61f]: expected pod UID "6f05b48c-f2f5-4b56-80da-c2135024b61f" but got "0b4b3ce4-58fb-430f-8465-ca0a501a6aba" from Kube API Jan 27 14:34:34 crc kubenswrapper[4729]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 27 14:34:34 crc kubenswrapper[4729]: > pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.780235 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.792572 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-openstack-config\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.792896 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp8h\" (UniqueName: \"kubernetes.io/projected/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-kube-api-access-brp8h\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.793037 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " 
pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.800528 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-77bbbcf5f4-pncs7"] Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.818839 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-77bbbcf5f4-pncs7"] Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.894846 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.895169 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-openstack-config\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.895388 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brp8h\" (UniqueName: \"kubernetes.io/projected/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-kube-api-access-brp8h\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.895514 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.896701 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-openstack-config\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.900279 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.901164 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-openstack-config-secret\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: I0127 14:34:34.918630 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp8h\" (UniqueName: \"kubernetes.io/projected/0b4b3ce4-58fb-430f-8465-ca0a501a6aba-kube-api-access-brp8h\") pod \"openstackclient\" (UID: \"0b4b3ce4-58fb-430f-8465-ca0a501a6aba\") " pod="openstack/openstackclient" Jan 27 14:34:34 crc kubenswrapper[4729]: E0127 14:34:34.955971 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52e155b0_d550_44b3_b70c_515a97c03df3.slice/crio-2437aa502c488ba1f0d75f490c560dee27b90522947aeb65604fac03ad49d2c4\": RecentStats: unable to find data in memory cache]" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.066469 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.449288 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.454386 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6f05b48c-f2f5-4b56-80da-c2135024b61f" podUID="0b4b3ce4-58fb-430f-8465-ca0a501a6aba" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.464596 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.506270 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-combined-ca-bundle\") pod \"6f05b48c-f2f5-4b56-80da-c2135024b61f\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.506396 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config\") pod \"6f05b48c-f2f5-4b56-80da-c2135024b61f\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.506526 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8s8d\" (UniqueName: \"kubernetes.io/projected/6f05b48c-f2f5-4b56-80da-c2135024b61f-kube-api-access-g8s8d\") pod \"6f05b48c-f2f5-4b56-80da-c2135024b61f\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.506626 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config-secret\") pod \"6f05b48c-f2f5-4b56-80da-c2135024b61f\" (UID: \"6f05b48c-f2f5-4b56-80da-c2135024b61f\") " Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.507798 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6f05b48c-f2f5-4b56-80da-c2135024b61f" (UID: "6f05b48c-f2f5-4b56-80da-c2135024b61f"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.512066 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6f05b48c-f2f5-4b56-80da-c2135024b61f" (UID: "6f05b48c-f2f5-4b56-80da-c2135024b61f"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.513175 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f05b48c-f2f5-4b56-80da-c2135024b61f-kube-api-access-g8s8d" (OuterVolumeSpecName: "kube-api-access-g8s8d") pod "6f05b48c-f2f5-4b56-80da-c2135024b61f" (UID: "6f05b48c-f2f5-4b56-80da-c2135024b61f"). InnerVolumeSpecName "kube-api-access-g8s8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.515063 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f05b48c-f2f5-4b56-80da-c2135024b61f" (UID: "6f05b48c-f2f5-4b56-80da-c2135024b61f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.609702 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.610428 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8s8d\" (UniqueName: \"kubernetes.io/projected/6f05b48c-f2f5-4b56-80da-c2135024b61f-kube-api-access-g8s8d\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.610483 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.610500 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f05b48c-f2f5-4b56-80da-c2135024b61f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.683909 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 27 14:34:35 crc kubenswrapper[4729]: W0127 14:34:35.693060 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b4b3ce4_58fb_430f_8465_ca0a501a6aba.slice/crio-1b50307457e566c93e3bcfa89c86508a7f1dd7e07adfb1db504e06bef539d0f9 WatchSource:0}: Error finding container 1b50307457e566c93e3bcfa89c86508a7f1dd7e07adfb1db504e06bef539d0f9: Status 404 returned error can't find the container with id 1b50307457e566c93e3bcfa89c86508a7f1dd7e07adfb1db504e06bef539d0f9 Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.780919 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Jan 27 14:34:35 crc kubenswrapper[4729]: I0127 14:34:35.838268 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 27 14:34:36 crc kubenswrapper[4729]: I0127 14:34:36.069960 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52e155b0-d550-44b3-b70c-515a97c03df3" path="/var/lib/kubelet/pods/52e155b0-d550-44b3-b70c-515a97c03df3/volumes" Jan 27 14:34:36 crc kubenswrapper[4729]: I0127 14:34:36.073849 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f05b48c-f2f5-4b56-80da-c2135024b61f" path="/var/lib/kubelet/pods/6f05b48c-f2f5-4b56-80da-c2135024b61f/volumes" Jan 27 14:34:36 crc kubenswrapper[4729]: I0127 14:34:36.460561 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 27 14:34:36 crc kubenswrapper[4729]: I0127 14:34:36.460643 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0b4b3ce4-58fb-430f-8465-ca0a501a6aba","Type":"ContainerStarted","Data":"1b50307457e566c93e3bcfa89c86508a7f1dd7e07adfb1db504e06bef539d0f9"} Jan 27 14:34:36 crc kubenswrapper[4729]: I0127 14:34:36.467773 4729 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="6f05b48c-f2f5-4b56-80da-c2135024b61f" podUID="0b4b3ce4-58fb-430f-8465-ca0a501a6aba" Jan 27 14:34:39 crc kubenswrapper[4729]: I0127 14:34:39.051304 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:34:39 crc kubenswrapper[4729]: E0127 14:34:39.052574 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.011179 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.012214 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-central-agent" containerID="cri-o://3bb8f907a60922a5b92bbfe58cb76521842e27f25b454359c062dfe9bd99400f" gracePeriod=30 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.013333 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="proxy-httpd" containerID="cri-o://a8b637b648095314f70a195afed3cdca299eb6d3d85459853a268076e9eb92fd" gracePeriod=30 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.013433 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="sg-core" containerID="cri-o://37c8bb492c96573e7bd89e066c94f79053eaeb46fb84d6480b56c9bdd299adc9" gracePeriod=30 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.013507 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-notification-agent" containerID="cri-o://eaa41ab6f4f516d8673a10a1637de426722cb259bcfc15ac0e410692f05be989" gracePeriod=30 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.024546 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="proxy-httpd" 
probeResult="failure" output="Get \"http://10.217.0.202:3000/\": read tcp 10.217.0.2:59890->10.217.0.202:3000: read: connection reset by peer" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.046962 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.201320 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-849d9cdd4f-w5qzz"] Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.204600 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.207941 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.208332 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.214739 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.216641 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-849d9cdd4f-w5qzz"] Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239154 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-config-data\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239215 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptl7\" (UniqueName: 
\"kubernetes.io/projected/39caa2da-8dac-4581-8e89-2b7f3b013b8c-kube-api-access-9ptl7\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239245 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-combined-ca-bundle\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239263 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39caa2da-8dac-4581-8e89-2b7f3b013b8c-etc-swift\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239357 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-internal-tls-certs\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239375 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-public-tls-certs\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239434 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39caa2da-8dac-4581-8e89-2b7f3b013b8c-run-httpd\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.239459 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39caa2da-8dac-4581-8e89-2b7f3b013b8c-log-httpd\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.346274 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ptl7\" (UniqueName: \"kubernetes.io/projected/39caa2da-8dac-4581-8e89-2b7f3b013b8c-kube-api-access-9ptl7\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.346400 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-combined-ca-bundle\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.346445 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39caa2da-8dac-4581-8e89-2b7f3b013b8c-etc-swift\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.346748 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-internal-tls-certs\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.346781 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-public-tls-certs\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.346975 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39caa2da-8dac-4581-8e89-2b7f3b013b8c-run-httpd\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.347041 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39caa2da-8dac-4581-8e89-2b7f3b013b8c-log-httpd\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.347168 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-config-data\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.350842 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39caa2da-8dac-4581-8e89-2b7f3b013b8c-log-httpd\") pod 
\"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.350979 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39caa2da-8dac-4581-8e89-2b7f3b013b8c-run-httpd\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.360871 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-public-tls-certs\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.362781 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-config-data\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.364395 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-combined-ca-bundle\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.364833 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39caa2da-8dac-4581-8e89-2b7f3b013b8c-internal-tls-certs\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " 
pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.364861 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/39caa2da-8dac-4581-8e89-2b7f3b013b8c-etc-swift\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.388427 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ptl7\" (UniqueName: \"kubernetes.io/projected/39caa2da-8dac-4581-8e89-2b7f3b013b8c-kube-api-access-9ptl7\") pod \"swift-proxy-849d9cdd4f-w5qzz\" (UID: \"39caa2da-8dac-4581-8e89-2b7f3b013b8c\") " pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.616343 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.752206 4729 generic.go:334] "Generic (PLEG): container finished" podID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerID="a8b637b648095314f70a195afed3cdca299eb6d3d85459853a268076e9eb92fd" exitCode=0 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.752255 4729 generic.go:334] "Generic (PLEG): container finished" podID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerID="37c8bb492c96573e7bd89e066c94f79053eaeb46fb84d6480b56c9bdd299adc9" exitCode=2 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.752266 4729 generic.go:334] "Generic (PLEG): container finished" podID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerID="3bb8f907a60922a5b92bbfe58cb76521842e27f25b454359c062dfe9bd99400f" exitCode=0 Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.752290 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerDied","Data":"a8b637b648095314f70a195afed3cdca299eb6d3d85459853a268076e9eb92fd"} Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.752320 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerDied","Data":"37c8bb492c96573e7bd89e066c94f79053eaeb46fb84d6480b56c9bdd299adc9"} Jan 27 14:34:41 crc kubenswrapper[4729]: I0127 14:34:41.752333 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerDied","Data":"3bb8f907a60922a5b92bbfe58cb76521842e27f25b454359c062dfe9bd99400f"} Jan 27 14:34:41 crc kubenswrapper[4729]: E0127 14:34:41.971762 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf766e8ed_e4f1_4fb1_8a92_cdacd0c86969.slice/crio-3bb8f907a60922a5b92bbfe58cb76521842e27f25b454359c062dfe9bd99400f.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:34:45 crc kubenswrapper[4729]: I0127 14:34:45.197918 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused" Jan 27 14:34:45 crc kubenswrapper[4729]: I0127 14:34:45.817193 4729 generic.go:334] "Generic (PLEG): container finished" podID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerID="eaa41ab6f4f516d8673a10a1637de426722cb259bcfc15ac0e410692f05be989" exitCode=0 Jan 27 14:34:45 crc kubenswrapper[4729]: I0127 14:34:45.817374 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerDied","Data":"eaa41ab6f4f516d8673a10a1637de426722cb259bcfc15ac0e410692f05be989"} Jan 27 14:34:47 crc kubenswrapper[4729]: I0127 14:34:47.306909 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.176653 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6cf6fb876b-qnrdg"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.188793 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.192921 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.194901 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.208010 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-blqsf" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.257417 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cf6fb876b-qnrdg"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.306495 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data-custom\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.306556 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7q9f\" (UniqueName: 
\"kubernetes.io/projected/4ac544ed-dbcb-46f4-9324-7000feda0230-kube-api-access-t7q9f\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.306634 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.306678 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-combined-ca-bundle\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.417448 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-combined-ca-bundle\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.417859 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data-custom\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.417911 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7q9f\" 
(UniqueName: \"kubernetes.io/projected/4ac544ed-dbcb-46f4-9324-7000feda0230-kube-api-access-t7q9f\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.418026 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.440178 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-combined-ca-bundle\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.451066 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data-custom\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.453157 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7q9f\" (UniqueName: \"kubernetes.io/projected/4ac544ed-dbcb-46f4-9324-7000feda0230-kube-api-access-t7q9f\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.454053 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data\") pod \"heat-engine-6cf6fb876b-qnrdg\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.463492 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ggrm2"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.468037 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.490987 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7fdc6f4965-8bhj7"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.494159 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.511594 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.530285 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7fdc6f4965-8bhj7"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.567234 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ggrm2"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.592533 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5459d8c648-mwnh7"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.594239 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.601677 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.627013 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5459d8c648-mwnh7"] Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630103 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630194 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-config\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630256 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data-custom\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630358 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " 
pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630390 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630430 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.630463 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.633325 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9fp\" (UniqueName: \"kubernetes.io/projected/94b88542-ca01-4c26-aa13-044dfb3684fc-kube-api-access-6v9fp\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.633466 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swgj2\" (UniqueName: \"kubernetes.io/projected/b3bbc635-af3b-445b-9e20-e690038a4e6b-kube-api-access-swgj2\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" 
(UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.633553 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-combined-ca-bundle\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.641833 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736729 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpsp\" (UniqueName: \"kubernetes.io/projected/ded19a5c-60ed-44f6-8e89-2b90d707fd66-kube-api-access-krpsp\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736795 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736821 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736851 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736889 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736922 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.736950 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-combined-ca-bundle\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737059 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9fp\" (UniqueName: \"kubernetes.io/projected/94b88542-ca01-4c26-aa13-044dfb3684fc-kube-api-access-6v9fp\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737122 4729 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-swgj2\" (UniqueName: \"kubernetes.io/projected/b3bbc635-af3b-445b-9e20-e690038a4e6b-kube-api-access-swgj2\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737167 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-combined-ca-bundle\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737191 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737233 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data-custom\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737278 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-config\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737334 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data-custom\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.737781 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.738428 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.738586 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.745769 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.746448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-config\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.764669 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data-custom\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.779852 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v9fp\" (UniqueName: \"kubernetes.io/projected/94b88542-ca01-4c26-aa13-044dfb3684fc-kube-api-access-6v9fp\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.783993 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-combined-ca-bundle\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.787276 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swgj2\" (UniqueName: \"kubernetes.io/projected/b3bbc635-af3b-445b-9e20-e690038a4e6b-kube-api-access-swgj2\") pod \"dnsmasq-dns-688b9f5b49-ggrm2\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.790426 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data\") pod \"heat-cfnapi-7fdc6f4965-8bhj7\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.841568 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.842749 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data-custom\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.843334 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpsp\" (UniqueName: \"kubernetes.io/projected/ded19a5c-60ed-44f6-8e89-2b90d707fd66-kube-api-access-krpsp\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.845700 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.845868 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-combined-ca-bundle\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 
14:34:49.850308 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data-custom\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.851515 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-combined-ca-bundle\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.856681 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.861049 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpsp\" (UniqueName: \"kubernetes.io/projected/ded19a5c-60ed-44f6-8e89-2b90d707fd66-kube-api-access-krpsp\") pod \"heat-api-5459d8c648-mwnh7\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.866754 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.885585 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.934163 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.950741 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-run-httpd\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.950840 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-scripts\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.951066 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-log-httpd\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.951149 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-config-data\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.951173 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt226\" (UniqueName: \"kubernetes.io/projected/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-kube-api-access-pt226\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.951235 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-combined-ca-bundle\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.951320 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-sg-core-conf-yaml\") pod \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\" (UID: \"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969\") " Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.951915 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.954313 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.955399 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f766e8ed-e4f1-4fb1-8a92-cdacd0c86969","Type":"ContainerDied","Data":"e752fb27cae7e7d04da4db8520619e5e5eefdc89b5bd888b58e23e51ebdd8eea"} Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.955463 4729 scope.go:117] "RemoveContainer" containerID="a8b637b648095314f70a195afed3cdca299eb6d3d85459853a268076e9eb92fd" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.955572 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.960285 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-scripts" (OuterVolumeSpecName: "scripts") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:49 crc kubenswrapper[4729]: I0127 14:34:49.960307 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-kube-api-access-pt226" (OuterVolumeSpecName: "kube-api-access-pt226") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "kube-api-access-pt226". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.059553 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.059588 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.059602 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.059615 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt226\" (UniqueName: \"kubernetes.io/projected/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-kube-api-access-pt226\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 
crc kubenswrapper[4729]: I0127 14:34:50.096218 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.102008 4729 scope.go:117] "RemoveContainer" containerID="37c8bb492c96573e7bd89e066c94f79053eaeb46fb84d6480b56c9bdd299adc9" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.163687 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.219097 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.265472 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-config-data" (OuterVolumeSpecName: "config-data") pod "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" (UID: "f766e8ed-e4f1-4fb1-8a92-cdacd0c86969"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.281619 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.281661 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.399049 4729 scope.go:117] "RemoveContainer" containerID="eaa41ab6f4f516d8673a10a1637de426722cb259bcfc15ac0e410692f05be989" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.510820 4729 scope.go:117] "RemoveContainer" containerID="3bb8f907a60922a5b92bbfe58cb76521842e27f25b454359c062dfe9bd99400f" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.649947 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.682959 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.712840 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-849d9cdd4f-w5qzz"] Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.762968 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:50 crc kubenswrapper[4729]: E0127 14:34:50.763476 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="sg-core" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763496 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="sg-core" Jan 27 14:34:50 crc kubenswrapper[4729]: E0127 14:34:50.763519 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-notification-agent" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763525 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-notification-agent" Jan 27 14:34:50 crc kubenswrapper[4729]: E0127 14:34:50.763539 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="proxy-httpd" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763545 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="proxy-httpd" Jan 27 14:34:50 crc kubenswrapper[4729]: E0127 14:34:50.763565 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-central-agent" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763571 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-central-agent" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763799 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="sg-core" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763812 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="proxy-httpd" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763821 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-central-agent" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.763831 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" containerName="ceilometer-notification-agent" Jan 27 14:34:50 
crc kubenswrapper[4729]: I0127 14:34:50.765852 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.785326 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.785517 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.805357 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.825756 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6cf6fb876b-qnrdg"] Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.958655 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-run-httpd\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.958704 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-log-httpd\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.958781 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncjrk\" (UniqueName: \"kubernetes.io/projected/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-kube-api-access-ncjrk\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.959056 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.959149 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-config-data\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.959234 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:50 crc kubenswrapper[4729]: I0127 14:34:50.959297 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-scripts\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.012064 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"0b4b3ce4-58fb-430f-8465-ca0a501a6aba","Type":"ContainerStarted","Data":"74fbb228f24ea262f1fb3fbe4c4199c6f83a01013b15ffdb3c07baae5f58eb3f"} Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.026221 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6fb876b-qnrdg" 
event={"ID":"4ac544ed-dbcb-46f4-9324-7000feda0230","Type":"ContainerStarted","Data":"d5dbb1a9819c5483ae035861b7ae4a9775b8daa8f5b6c9fd4e9a5fbad4315ae1"} Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.032335 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" event={"ID":"39caa2da-8dac-4581-8e89-2b7f3b013b8c","Type":"ContainerStarted","Data":"4dbf0ac155773dbee5e58be3d1f6284970981d1fb8c8c03a6513398478b188ce"} Jan 27 14:34:51 crc kubenswrapper[4729]: W0127 14:34:51.061508 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3bbc635_af3b_445b_9e20_e690038a4e6b.slice/crio-829f0a9684834105ea4eb3a7e6fe9f95ad50a7daf592297df8cb8f13b6d55870 WatchSource:0}: Error finding container 829f0a9684834105ea4eb3a7e6fe9f95ad50a7daf592297df8cb8f13b6d55870: Status 404 returned error can't find the container with id 829f0a9684834105ea4eb3a7e6fe9f95ad50a7daf592297df8cb8f13b6d55870 Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.062632 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.062702 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-scripts\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.064254 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-run-httpd\") pod \"ceilometer-0\" (UID: 
\"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.064291 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-log-httpd\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.064354 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncjrk\" (UniqueName: \"kubernetes.io/projected/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-kube-api-access-ncjrk\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.064555 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.064651 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-config-data\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.065864 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-log-httpd\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.068742 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-run-httpd\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.070496 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.071206 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-scripts\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.074122 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-config-data\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.080653 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.082109 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ggrm2"] Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.122481 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncjrk\" (UniqueName: \"kubernetes.io/projected/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-kube-api-access-ncjrk\") pod 
\"ceilometer-0\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.128709 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5459d8c648-mwnh7"] Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.137295 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.294849376 podStartE2EDuration="17.137257957s" podCreationTimestamp="2026-01-27 14:34:34 +0000 UTC" firstStartedPulling="2026-01-27 14:34:35.695104463 +0000 UTC m=+1762.279295467" lastFinishedPulling="2026-01-27 14:34:49.537513044 +0000 UTC m=+1776.121704048" observedRunningTime="2026-01-27 14:34:51.041923181 +0000 UTC m=+1777.626114205" watchObservedRunningTime="2026-01-27 14:34:51.137257957 +0000 UTC m=+1777.721448961" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.245372 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.266405 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7fdc6f4965-8bhj7"] Jan 27 14:34:51 crc kubenswrapper[4729]: W0127 14:34:51.280410 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94b88542_ca01_4c26_aa13_044dfb3684fc.slice/crio-afa05c4350668edf0e00e0ea43f0f3729dde36648cc50fe3bef5a02e25c885d4 WatchSource:0}: Error finding container afa05c4350668edf0e00e0ea43f0f3729dde36648cc50fe3bef5a02e25c885d4: Status 404 returned error can't find the container with id afa05c4350668edf0e00e0ea43f0f3729dde36648cc50fe3bef5a02e25c885d4 Jan 27 14:34:51 crc kubenswrapper[4729]: I0127 14:34:51.957271 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.054908 4729 scope.go:117] "RemoveContainer" 
containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:34:52 crc kubenswrapper[4729]: E0127 14:34:52.055304 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.068090 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f766e8ed-e4f1-4fb1-8a92-cdacd0c86969" path="/var/lib/kubelet/pods/f766e8ed-e4f1-4fb1-8a92-cdacd0c86969/volumes" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.069194 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5459d8c648-mwnh7" event={"ID":"ded19a5c-60ed-44f6-8e89-2b90d707fd66","Type":"ContainerStarted","Data":"1e25f6d90c1a82feba0bfc59dd91162ed7d63b4602b9552eaddeeca0802e3e5f"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.071739 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" event={"ID":"94b88542-ca01-4c26-aa13-044dfb3684fc","Type":"ContainerStarted","Data":"afa05c4350668edf0e00e0ea43f0f3729dde36648cc50fe3bef5a02e25c885d4"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.076490 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6fb876b-qnrdg" event={"ID":"4ac544ed-dbcb-46f4-9324-7000feda0230","Type":"ContainerStarted","Data":"17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.079216 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerStarted","Data":"3b704319c1c765b3cb79ce17ff0837f37870ff75b803063b3dd576e4a455af49"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.079375 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.087420 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" event={"ID":"39caa2da-8dac-4581-8e89-2b7f3b013b8c","Type":"ContainerStarted","Data":"60292b6da248649de76911987650c774dcbae086aa828066c0c35c1e3a6e42f2"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.087464 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" event={"ID":"39caa2da-8dac-4581-8e89-2b7f3b013b8c","Type":"ContainerStarted","Data":"cc77fe7d87701a3451c3ed1562718d338968834b0ab8d7c54eddda794768227f"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.087727 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.087753 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.095731 4729 generic.go:334] "Generic (PLEG): container finished" podID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerID="9d0a76d4af9821128fa2b7ebd1d044577dd2a7e4e2ea2cc7c1c3d6b99055045d" exitCode=0 Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.096944 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" event={"ID":"b3bbc635-af3b-445b-9e20-e690038a4e6b","Type":"ContainerDied","Data":"9d0a76d4af9821128fa2b7ebd1d044577dd2a7e4e2ea2cc7c1c3d6b99055045d"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.096998 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" event={"ID":"b3bbc635-af3b-445b-9e20-e690038a4e6b","Type":"ContainerStarted","Data":"829f0a9684834105ea4eb3a7e6fe9f95ad50a7daf592297df8cb8f13b6d55870"} Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.104610 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6cf6fb876b-qnrdg" podStartSLOduration=3.104584233 podStartE2EDuration="3.104584233s" podCreationTimestamp="2026-01-27 14:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:52.095257564 +0000 UTC m=+1778.679448568" watchObservedRunningTime="2026-01-27 14:34:52.104584233 +0000 UTC m=+1778.688775237" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.170990 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" podStartSLOduration=11.170965697 podStartE2EDuration="11.170965697s" podCreationTimestamp="2026-01-27 14:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:52.125425058 +0000 UTC m=+1778.709616072" watchObservedRunningTime="2026-01-27 14:34:52.170965697 +0000 UTC m=+1778.755156701" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.462236 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85dcfc7bf5-fs787" Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.562742 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bf97cd6d4-xv5bh"] Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.563384 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bf97cd6d4-xv5bh" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-api" 
containerID="cri-o://9be07918cfb846752e460e6834b8e23657061bac59a7068f67832dba9247d706" gracePeriod=30 Jan 27 14:34:52 crc kubenswrapper[4729]: I0127 14:34:52.563586 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-bf97cd6d4-xv5bh" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-httpd" containerID="cri-o://1cf591707b8587dcc2a35be6612e08e49d29952c42ff3446d035a84b14a7e76c" gracePeriod=30 Jan 27 14:34:53 crc kubenswrapper[4729]: I0127 14:34:53.118073 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerStarted","Data":"e4e442910ba0923662137915c2ceb1c4aa4a12f25271824c2ed01e9130803594"} Jan 27 14:34:53 crc kubenswrapper[4729]: I0127 14:34:53.125496 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" event={"ID":"b3bbc635-af3b-445b-9e20-e690038a4e6b","Type":"ContainerStarted","Data":"22437fc2564b3fb71032a61c7d785899d431364ba912efc3cc173c497092a868"} Jan 27 14:34:53 crc kubenswrapper[4729]: I0127 14:34:53.126755 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:53 crc kubenswrapper[4729]: I0127 14:34:53.130501 4729 generic.go:334] "Generic (PLEG): container finished" podID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerID="1cf591707b8587dcc2a35be6612e08e49d29952c42ff3446d035a84b14a7e76c" exitCode=0 Jan 27 14:34:53 crc kubenswrapper[4729]: I0127 14:34:53.130582 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf97cd6d4-xv5bh" event={"ID":"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5","Type":"ContainerDied","Data":"1cf591707b8587dcc2a35be6612e08e49d29952c42ff3446d035a84b14a7e76c"} Jan 27 14:34:53 crc kubenswrapper[4729]: I0127 14:34:53.149590 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" 
podStartSLOduration=4.149566775 podStartE2EDuration="4.149566775s" podCreationTimestamp="2026-01-27 14:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:53.147169589 +0000 UTC m=+1779.731360613" watchObservedRunningTime="2026-01-27 14:34:53.149566775 +0000 UTC m=+1779.733757779" Jan 27 14:34:54 crc kubenswrapper[4729]: I0127 14:34:54.044221 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.070717 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-d9cbb5dfc-kxwhh"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.077493 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.092553 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5579488986-4dxmp"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.094139 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.118488 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d9cbb5dfc-kxwhh"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.132777 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-ddd9fd968-xf5xj"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.134563 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.147811 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ddd9fd968-xf5xj"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.200504 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vkh\" (UniqueName: \"kubernetes.io/projected/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-kube-api-access-79vkh\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.200639 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.200742 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data-custom\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.200782 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9kn\" (UniqueName: \"kubernetes.io/projected/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-kube-api-access-5x9kn\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.200837 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9k2\" (UniqueName: \"kubernetes.io/projected/6976f926-e4ad-496f-926f-4e57870ba474-kube-api-access-7j9k2\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.200925 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-combined-ca-bundle\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.201273 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-combined-ca-bundle\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.201525 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data-custom\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.201603 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: 
I0127 14:34:56.201628 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data-custom\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.201805 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.201833 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-combined-ca-bundle\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.216182 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5579488986-4dxmp"] Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304300 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9k2\" (UniqueName: \"kubernetes.io/projected/6976f926-e4ad-496f-926f-4e57870ba474-kube-api-access-7j9k2\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304590 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-combined-ca-bundle\") pod 
\"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304629 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-combined-ca-bundle\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304686 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data-custom\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304711 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304726 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data-custom\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304790 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data\") pod \"heat-api-5579488986-4dxmp\" (UID: 
\"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304810 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-combined-ca-bundle\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304838 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vkh\" (UniqueName: \"kubernetes.io/projected/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-kube-api-access-79vkh\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304870 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304954 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data-custom\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.304979 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9kn\" (UniqueName: \"kubernetes.io/projected/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-kube-api-access-5x9kn\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " 
pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.312791 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.315891 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data-custom\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.327125 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.328158 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-combined-ca-bundle\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.331179 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data-custom\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.331846 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-combined-ca-bundle\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.332260 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data-custom\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.336454 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9k2\" (UniqueName: \"kubernetes.io/projected/6976f926-e4ad-496f-926f-4e57870ba474-kube-api-access-7j9k2\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.339682 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-combined-ca-bundle\") pod \"heat-cfnapi-ddd9fd968-xf5xj\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.339911 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.342318 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5x9kn\" (UniqueName: \"kubernetes.io/projected/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-kube-api-access-5x9kn\") pod \"heat-api-5579488986-4dxmp\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.364578 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vkh\" (UniqueName: \"kubernetes.io/projected/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-kube-api-access-79vkh\") pod \"heat-engine-d9cbb5dfc-kxwhh\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.410654 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.417809 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.517328 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.637262 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:56 crc kubenswrapper[4729]: I0127 14:34:56.670668 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.208271 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5579488986-4dxmp"] Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.277927 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5459d8c648-mwnh7" event={"ID":"ded19a5c-60ed-44f6-8e89-2b90d707fd66","Type":"ContainerStarted","Data":"0e38433e67cde8b19aec50eed7c3542afa58493a78328622162cb2b3bb787e78"} Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.279300 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.296012 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" event={"ID":"94b88542-ca01-4c26-aa13-044dfb3684fc","Type":"ContainerStarted","Data":"0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731"} Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.297310 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.316269 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5459d8c648-mwnh7" podStartSLOduration=3.095565013 podStartE2EDuration="8.316242901s" podCreationTimestamp="2026-01-27 14:34:49 +0000 UTC" firstStartedPulling="2026-01-27 14:34:51.076159967 +0000 UTC m=+1777.660350981" lastFinishedPulling="2026-01-27 
14:34:56.296837865 +0000 UTC m=+1782.881028869" observedRunningTime="2026-01-27 14:34:57.302918702 +0000 UTC m=+1783.887109706" watchObservedRunningTime="2026-01-27 14:34:57.316242901 +0000 UTC m=+1783.900433905" Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.354536 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" podStartSLOduration=3.409127573 podStartE2EDuration="8.354515449s" podCreationTimestamp="2026-01-27 14:34:49 +0000 UTC" firstStartedPulling="2026-01-27 14:34:51.319591348 +0000 UTC m=+1777.903782352" lastFinishedPulling="2026-01-27 14:34:56.264979224 +0000 UTC m=+1782.849170228" observedRunningTime="2026-01-27 14:34:57.334497535 +0000 UTC m=+1783.918688549" watchObservedRunningTime="2026-01-27 14:34:57.354515449 +0000 UTC m=+1783.938706453" Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.403961 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5579488986-4dxmp" event={"ID":"fd036f5b-4ca1-4f6e-99b8-42fe511561dd","Type":"ContainerStarted","Data":"e4bb4727714fc54028552f6b0e6f3ceee86449b45bf66a691ac45b1b6f960f98"} Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.463815 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-d9cbb5dfc-kxwhh"] Jan 27 14:34:57 crc kubenswrapper[4729]: W0127 14:34:57.487374 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24dbee0b_07e8_420f_af0a_83c8d5ccf70f.slice/crio-2aec6916665028314b356c1bea6a1edd5bcbdf9ffe97bb004cd6d0a096cd68e9 WatchSource:0}: Error finding container 2aec6916665028314b356c1bea6a1edd5bcbdf9ffe97bb004cd6d0a096cd68e9: Status 404 returned error can't find the container with id 2aec6916665028314b356c1bea6a1edd5bcbdf9ffe97bb004cd6d0a096cd68e9 Jan 27 14:34:57 crc kubenswrapper[4729]: I0127 14:34:57.536254 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-ddd9fd968-xf5xj"] Jan 27 14:34:58 crc kubenswrapper[4729]: I0127 14:34:58.445316 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" event={"ID":"24dbee0b-07e8-420f-af0a-83c8d5ccf70f","Type":"ContainerStarted","Data":"2aec6916665028314b356c1bea6a1edd5bcbdf9ffe97bb004cd6d0a096cd68e9"} Jan 27 14:34:58 crc kubenswrapper[4729]: I0127 14:34:58.450292 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerStarted","Data":"b6be0fcda697cfc1e6537b30c423615e44b8b583b0b5723cadcc4fabcb27fa4e"} Jan 27 14:34:58 crc kubenswrapper[4729]: I0127 14:34:58.463712 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" event={"ID":"6976f926-e4ad-496f-926f-4e57870ba474","Type":"ContainerStarted","Data":"0841d077abed928a41a5c3edc96009230d20461964fb514692ed718fb2da7ac8"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.475976 4729 generic.go:334] "Generic (PLEG): container finished" podID="6976f926-e4ad-496f-926f-4e57870ba474" containerID="3305090c4f8c8a991b2e339c732cfc7e06aabce9c691c09b5a42c5ca947a3802" exitCode=1 Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.476050 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" event={"ID":"6976f926-e4ad-496f-926f-4e57870ba474","Type":"ContainerDied","Data":"3305090c4f8c8a991b2e339c732cfc7e06aabce9c691c09b5a42c5ca947a3802"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.476951 4729 scope.go:117] "RemoveContainer" containerID="3305090c4f8c8a991b2e339c732cfc7e06aabce9c691c09b5a42c5ca947a3802" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.480476 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerID="beccb3ccc99aea0cc83717320d7de6951335067169f3c5ff67d8ac90e366b560" exitCode=1 Jan 27 14:34:59 crc 
kubenswrapper[4729]: I0127 14:34:59.480578 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5579488986-4dxmp" event={"ID":"fd036f5b-4ca1-4f6e-99b8-42fe511561dd","Type":"ContainerDied","Data":"beccb3ccc99aea0cc83717320d7de6951335067169f3c5ff67d8ac90e366b560"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.481577 4729 scope.go:117] "RemoveContainer" containerID="beccb3ccc99aea0cc83717320d7de6951335067169f3c5ff67d8ac90e366b560" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.483949 4729 generic.go:334] "Generic (PLEG): container finished" podID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerID="9be07918cfb846752e460e6834b8e23657061bac59a7068f67832dba9247d706" exitCode=0 Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.484124 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf97cd6d4-xv5bh" event={"ID":"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5","Type":"ContainerDied","Data":"9be07918cfb846752e460e6834b8e23657061bac59a7068f67832dba9247d706"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.484178 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf97cd6d4-xv5bh" event={"ID":"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5","Type":"ContainerDied","Data":"00fb40e2bbd07d3d86a607dfdd693d49807da2ddaf069950178a36331bb05963"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.484194 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00fb40e2bbd07d3d86a607dfdd693d49807da2ddaf069950178a36331bb05963" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.507316 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" event={"ID":"24dbee0b-07e8-420f-af0a-83c8d5ccf70f","Type":"ContainerStarted","Data":"23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.507482 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.510658 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.523069 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerStarted","Data":"c75feda8e12030e1f57fb2c1a0f76b1d48d08e554dccb9d3c838c90fa59dc782"} Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.574896 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" podStartSLOduration=3.57485999 podStartE2EDuration="3.57485999s" podCreationTimestamp="2026-01-27 14:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:34:59.542408683 +0000 UTC m=+1786.126599687" watchObservedRunningTime="2026-01-27 14:34:59.57485999 +0000 UTC m=+1786.159050994" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.625110 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-httpd-config\") pod \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.626918 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-combined-ca-bundle\") pod \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.626980 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-config\") pod \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.627193 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntn7j\" (UniqueName: \"kubernetes.io/projected/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-kube-api-access-ntn7j\") pod \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.627419 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-ovndb-tls-certs\") pod \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\" (UID: \"ebd761c8-ec86-476b-9ac3-9c35bbb8eae5\") " Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.636330 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" (UID: "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.642435 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-kube-api-access-ntn7j" (OuterVolumeSpecName: "kube-api-access-ntn7j") pod "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" (UID: "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5"). InnerVolumeSpecName "kube-api-access-ntn7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.725974 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" (UID: "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.733012 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.733053 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntn7j\" (UniqueName: \"kubernetes.io/projected/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-kube-api-access-ntn7j\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.733070 4729 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.758165 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-config" (OuterVolumeSpecName: "config") pod "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" (UID: "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.775746 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" (UID: "ebd761c8-ec86-476b-9ac3-9c35bbb8eae5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.835767 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.835808 4729 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.843032 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.944353 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mg9rv"] Jan 27 14:34:59 crc kubenswrapper[4729]: I0127 14:34:59.944722 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" containerName="dnsmasq-dns" containerID="cri-o://f0b2ae95927b5264693f1c8e2510a1e221f2de9011dce389616509879143eb00" gracePeriod=10 Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.505170 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5459d8c648-mwnh7"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.577027 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-cfnapi-7fdc6f4965-8bhj7"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.622924 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-55554cf5d6-rht7q"] Jan 27 14:35:00 crc kubenswrapper[4729]: E0127 14:35:00.623726 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-api" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.623750 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-api" Jan 27 14:35:00 crc kubenswrapper[4729]: E0127 14:35:00.623771 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-httpd" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.623780 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-httpd" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.624092 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-httpd" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.630357 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" containerName="neutron-api" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.637738 4729 generic.go:334] "Generic (PLEG): container finished" podID="6976f926-e4ad-496f-926f-4e57870ba474" containerID="facb3599387ca46ebfd798fb8e4dc8dd869907cd58cd8b3f44ac7591a2c06e38" exitCode=1 Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.638767 4729 scope.go:117] "RemoveContainer" containerID="facb3599387ca46ebfd798fb8e4dc8dd869907cd58cd8b3f44ac7591a2c06e38" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.643154 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" 
event={"ID":"6976f926-e4ad-496f-926f-4e57870ba474","Type":"ContainerDied","Data":"facb3599387ca46ebfd798fb8e4dc8dd869907cd58cd8b3f44ac7591a2c06e38"} Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.643261 4729 scope.go:117] "RemoveContainer" containerID="3305090c4f8c8a991b2e339c732cfc7e06aabce9c691c09b5a42c5ca947a3802" Jan 27 14:35:00 crc kubenswrapper[4729]: E0127 14:35:00.643197 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-ddd9fd968-xf5xj_openstack(6976f926-e4ad-496f-926f-4e57870ba474)\"" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" podUID="6976f926-e4ad-496f-926f-4e57870ba474" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.643480 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.647442 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.647604 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.672242 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55554cf5d6-rht7q"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.678336 4729 generic.go:334] "Generic (PLEG): container finished" podID="ef6b3230-207e-4f3b-b095-dae641faebef" containerID="f0b2ae95927b5264693f1c8e2510a1e221f2de9011dce389616509879143eb00" exitCode=0 Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.678445 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" 
event={"ID":"ef6b3230-207e-4f3b-b095-dae641faebef","Type":"ContainerDied","Data":"f0b2ae95927b5264693f1c8e2510a1e221f2de9011dce389616509879143eb00"} Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.696417 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5fc78b9bfd-chbvw"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.704910 4729 generic.go:334] "Generic (PLEG): container finished" podID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerID="7301a756c69be98955bd51e6b9c16e80c898bf50f357fd7fbf6c64bec5ca08e2" exitCode=1 Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.705211 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" podUID="94b88542-ca01-4c26-aa13-044dfb3684fc" containerName="heat-cfnapi" containerID="cri-o://0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731" gracePeriod=60 Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.709387 4729 scope.go:117] "RemoveContainer" containerID="7301a756c69be98955bd51e6b9c16e80c898bf50f357fd7fbf6c64bec5ca08e2" Jan 27 14:35:00 crc kubenswrapper[4729]: E0127 14:35:00.709787 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5579488986-4dxmp_openstack(fd036f5b-4ca1-4f6e-99b8-42fe511561dd)\"" pod="openstack/heat-api-5579488986-4dxmp" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.709995 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5459d8c648-mwnh7" podUID="ded19a5c-60ed-44f6-8e89-2b90d707fd66" containerName="heat-api" containerID="cri-o://0e38433e67cde8b19aec50eed7c3542afa58493a78328622162cb2b3bb787e78" gracePeriod=60 Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.710732 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bf97cd6d4-xv5bh" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.726325 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fc78b9bfd-chbvw"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.726367 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5579488986-4dxmp" event={"ID":"fd036f5b-4ca1-4f6e-99b8-42fe511561dd","Type":"ContainerDied","Data":"7301a756c69be98955bd51e6b9c16e80c898bf50f357fd7fbf6c64bec5ca08e2"} Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.726474 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.728852 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.729128 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.767930 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-public-tls-certs\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.767982 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-internal-tls-certs\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.770450 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-combined-ca-bundle\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.770574 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.770704 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data-custom\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.770773 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvhq\" (UniqueName: \"kubernetes.io/projected/21a1bec4-36d5-438f-8575-665c6edad962-kube-api-access-pmvhq\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.861277 4729 scope.go:117] "RemoveContainer" containerID="beccb3ccc99aea0cc83717320d7de6951335067169f3c5ff67d8ac90e366b560" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874447 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmvhq\" (UniqueName: \"kubernetes.io/projected/21a1bec4-36d5-438f-8575-665c6edad962-kube-api-access-pmvhq\") pod 
\"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874524 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwsw\" (UniqueName: \"kubernetes.io/projected/5f89791b-c8ab-4897-8989-b3aab9352e5e-kube-api-access-pjwsw\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874550 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874574 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-combined-ca-bundle\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874622 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-public-tls-certs\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874654 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-internal-tls-certs\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874706 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-internal-tls-certs\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874736 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-public-tls-certs\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874810 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-combined-ca-bundle\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874851 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data-custom\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874915 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.874996 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data-custom\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.885731 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-combined-ca-bundle\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.886448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data-custom\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.887507 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-public-tls-certs\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.889119 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data\") pod 
\"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.895750 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmvhq\" (UniqueName: \"kubernetes.io/projected/21a1bec4-36d5-438f-8575-665c6edad962-kube-api-access-pmvhq\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.897231 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-internal-tls-certs\") pod \"heat-api-55554cf5d6-rht7q\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.898019 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bf97cd6d4-xv5bh"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.913171 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bf97cd6d4-xv5bh"] Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.923137 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.977681 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data-custom\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.977919 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwsw\" (UniqueName: \"kubernetes.io/projected/5f89791b-c8ab-4897-8989-b3aab9352e5e-kube-api-access-pjwsw\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.977954 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.977989 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-combined-ca-bundle\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.978128 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-internal-tls-certs\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " 
pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:00 crc kubenswrapper[4729]: I0127 14:35:00.978180 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-public-tls-certs\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.003394 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-internal-tls-certs\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.003496 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data-custom\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.004265 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwsw\" (UniqueName: \"kubernetes.io/projected/5f89791b-c8ab-4897-8989-b3aab9352e5e-kube-api-access-pjwsw\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.004277 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-public-tls-certs\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 
crc kubenswrapper[4729]: I0127 14:35:01.004291 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.004899 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-combined-ca-bundle\") pod \"heat-cfnapi-5fc78b9bfd-chbvw\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.079435 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd7qc\" (UniqueName: \"kubernetes.io/projected/ef6b3230-207e-4f3b-b095-dae641faebef-kube-api-access-pd7qc\") pod \"ef6b3230-207e-4f3b-b095-dae641faebef\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.079485 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-nb\") pod \"ef6b3230-207e-4f3b-b095-dae641faebef\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.079710 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-config\") pod \"ef6b3230-207e-4f3b-b095-dae641faebef\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.079736 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-svc\") pod \"ef6b3230-207e-4f3b-b095-dae641faebef\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.079776 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-sb\") pod \"ef6b3230-207e-4f3b-b095-dae641faebef\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.079807 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-swift-storage-0\") pod \"ef6b3230-207e-4f3b-b095-dae641faebef\" (UID: \"ef6b3230-207e-4f3b-b095-dae641faebef\") " Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.085747 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6b3230-207e-4f3b-b095-dae641faebef-kube-api-access-pd7qc" (OuterVolumeSpecName: "kube-api-access-pd7qc") pod "ef6b3230-207e-4f3b-b095-dae641faebef" (UID: "ef6b3230-207e-4f3b-b095-dae641faebef"). InnerVolumeSpecName "kube-api-access-pd7qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.166787 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef6b3230-207e-4f3b-b095-dae641faebef" (UID: "ef6b3230-207e-4f3b-b095-dae641faebef"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.171214 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef6b3230-207e-4f3b-b095-dae641faebef" (UID: "ef6b3230-207e-4f3b-b095-dae641faebef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.173612 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef6b3230-207e-4f3b-b095-dae641faebef" (UID: "ef6b3230-207e-4f3b-b095-dae641faebef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.175389 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.183768 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd7qc\" (UniqueName: \"kubernetes.io/projected/ef6b3230-207e-4f3b-b095-dae641faebef-kube-api-access-pd7qc\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.184072 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.184151 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.184221 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.209574 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.221482 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-config" (OuterVolumeSpecName: "config") pod "ef6b3230-207e-4f3b-b095-dae641faebef" (UID: "ef6b3230-207e-4f3b-b095-dae641faebef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.237455 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef6b3230-207e-4f3b-b095-dae641faebef" (UID: "ef6b3230-207e-4f3b-b095-dae641faebef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.286455 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.310352 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef6b3230-207e-4f3b-b095-dae641faebef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.422035 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.422119 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.518535 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.518583 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.759449 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerStarted","Data":"0ce7ecf61234150d6c951412f700d84da87e54a7444d3d99ad377fd084b677a5"} Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.759561 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-central-agent" containerID="cri-o://e4e442910ba0923662137915c2ceb1c4aa4a12f25271824c2ed01e9130803594" gracePeriod=30 Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.759958 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.759992 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-notification-agent" containerID="cri-o://b6be0fcda697cfc1e6537b30c423615e44b8b583b0b5723cadcc4fabcb27fa4e" gracePeriod=30 Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.759962 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="sg-core" containerID="cri-o://c75feda8e12030e1f57fb2c1a0f76b1d48d08e554dccb9d3c838c90fa59dc782" gracePeriod=30 Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.760007 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="proxy-httpd" containerID="cri-o://0ce7ecf61234150d6c951412f700d84da87e54a7444d3d99ad377fd084b677a5" gracePeriod=30 Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.770024 4729 scope.go:117] "RemoveContainer" containerID="facb3599387ca46ebfd798fb8e4dc8dd869907cd58cd8b3f44ac7591a2c06e38" Jan 27 14:35:01 crc kubenswrapper[4729]: E0127 14:35:01.777224 4729 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-ddd9fd968-xf5xj_openstack(6976f926-e4ad-496f-926f-4e57870ba474)\"" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" podUID="6976f926-e4ad-496f-926f-4e57870ba474" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.789591 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.33276255 podStartE2EDuration="11.789569695s" podCreationTimestamp="2026-01-27 14:34:50 +0000 UTC" firstStartedPulling="2026-01-27 14:34:51.994092287 +0000 UTC m=+1778.578283291" lastFinishedPulling="2026-01-27 14:35:00.450899432 +0000 UTC m=+1787.035090436" observedRunningTime="2026-01-27 14:35:01.783853737 +0000 UTC m=+1788.368044761" watchObservedRunningTime="2026-01-27 14:35:01.789569695 +0000 UTC m=+1788.373760699" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.804104 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.808507 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-mg9rv" event={"ID":"ef6b3230-207e-4f3b-b095-dae641faebef","Type":"ContainerDied","Data":"942cb605ab964db4954782349787aa62e2c8445b9b4e5b4cbc393cd0404e2d62"} Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.808718 4729 scope.go:117] "RemoveContainer" containerID="f0b2ae95927b5264693f1c8e2510a1e221f2de9011dce389616509879143eb00" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.832397 4729 generic.go:334] "Generic (PLEG): container finished" podID="ded19a5c-60ed-44f6-8e89-2b90d707fd66" containerID="0e38433e67cde8b19aec50eed7c3542afa58493a78328622162cb2b3bb787e78" exitCode=0 Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.832462 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5459d8c648-mwnh7" event={"ID":"ded19a5c-60ed-44f6-8e89-2b90d707fd66","Type":"ContainerDied","Data":"0e38433e67cde8b19aec50eed7c3542afa58493a78328622162cb2b3bb787e78"} Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.841027 4729 scope.go:117] "RemoveContainer" containerID="7301a756c69be98955bd51e6b9c16e80c898bf50f357fd7fbf6c64bec5ca08e2" Jan 27 14:35:01 crc kubenswrapper[4729]: E0127 14:35:01.846444 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5579488986-4dxmp_openstack(fd036f5b-4ca1-4f6e-99b8-42fe511561dd)\"" pod="openstack/heat-api-5579488986-4dxmp" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" Jan 27 14:35:01 crc kubenswrapper[4729]: I0127 14:35:01.871069 4729 scope.go:117] "RemoveContainer" containerID="3e322e83c434c8d0c7cbe6a175816d709ac3e10c3c041485fdf04e88c8f64bae" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.028439 4729 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mg9rv"] Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.039350 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-mg9rv"] Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.069739 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebd761c8-ec86-476b-9ac3-9c35bbb8eae5" path="/var/lib/kubelet/pods/ebd761c8-ec86-476b-9ac3-9c35bbb8eae5/volumes" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.086387 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" path="/var/lib/kubelet/pods/ef6b3230-207e-4f3b-b095-dae641faebef/volumes" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.198639 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-55554cf5d6-rht7q"] Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.268243 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fc78b9bfd-chbvw"] Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.837735 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.846079 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.854680 4729 generic.go:334] "Generic (PLEG): container finished" podID="94b88542-ca01-4c26-aa13-044dfb3684fc" containerID="0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731" exitCode=0 Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.854778 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" event={"ID":"94b88542-ca01-4c26-aa13-044dfb3684fc","Type":"ContainerDied","Data":"0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.854813 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" event={"ID":"94b88542-ca01-4c26-aa13-044dfb3684fc","Type":"ContainerDied","Data":"afa05c4350668edf0e00e0ea43f0f3729dde36648cc50fe3bef5a02e25c885d4"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.854833 4729 scope.go:117] "RemoveContainer" containerID="0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.854994 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7fdc6f4965-8bhj7" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.872260 4729 generic.go:334] "Generic (PLEG): container finished" podID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerID="0ce7ecf61234150d6c951412f700d84da87e54a7444d3d99ad377fd084b677a5" exitCode=0 Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.872301 4729 generic.go:334] "Generic (PLEG): container finished" podID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerID="c75feda8e12030e1f57fb2c1a0f76b1d48d08e554dccb9d3c838c90fa59dc782" exitCode=2 Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.872313 4729 generic.go:334] "Generic (PLEG): container finished" podID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerID="b6be0fcda697cfc1e6537b30c423615e44b8b583b0b5723cadcc4fabcb27fa4e" exitCode=0 Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.872366 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerDied","Data":"0ce7ecf61234150d6c951412f700d84da87e54a7444d3d99ad377fd084b677a5"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.872396 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerDied","Data":"c75feda8e12030e1f57fb2c1a0f76b1d48d08e554dccb9d3c838c90fa59dc782"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.872449 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerDied","Data":"b6be0fcda697cfc1e6537b30c423615e44b8b583b0b5723cadcc4fabcb27fa4e"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.898940 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55554cf5d6-rht7q" 
event={"ID":"21a1bec4-36d5-438f-8575-665c6edad962","Type":"ContainerStarted","Data":"8bbafd3084ae569ed949d047dfd6fa7af73a9616e38cba76722b769415dd38e1"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.913439 4729 scope.go:117] "RemoveContainer" containerID="0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731" Jan 27 14:35:02 crc kubenswrapper[4729]: E0127 14:35:02.914581 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731\": container with ID starting with 0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731 not found: ID does not exist" containerID="0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.914609 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731"} err="failed to get container status \"0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731\": rpc error: code = NotFound desc = could not find container \"0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731\": container with ID starting with 0c8e50c68934b8dac480f0b2297e379bdad08d44feaae6b6b52b1c66df25e731 not found: ID does not exist" Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.914776 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" event={"ID":"5f89791b-c8ab-4897-8989-b3aab9352e5e","Type":"ContainerStarted","Data":"f5f8b78b07a028f192a486c6f27117bbec1c9da53de3ca4a4c818eea75d33442"} Jan 27 14:35:02 crc kubenswrapper[4729]: I0127 14:35:02.943673 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5459d8c648-mwnh7" 
event={"ID":"ded19a5c-60ed-44f6-8e89-2b90d707fd66","Type":"ContainerDied","Data":"1e25f6d90c1a82feba0bfc59dd91162ed7d63b4602b9552eaddeeca0802e3e5f"} Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.943735 4729 scope.go:117] "RemoveContainer" containerID="0e38433e67cde8b19aec50eed7c3542afa58493a78328622162cb2b3bb787e78" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.944217 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5459d8c648-mwnh7" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.944558 4729 scope.go:117] "RemoveContainer" containerID="7301a756c69be98955bd51e6b9c16e80c898bf50f357fd7fbf6c64bec5ca08e2" Jan 27 14:35:05 crc kubenswrapper[4729]: E0127 14:35:02.944793 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5579488986-4dxmp_openstack(fd036f5b-4ca1-4f6e-99b8-42fe511561dd)\"" pod="openstack/heat-api-5579488986-4dxmp" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.947770 4729 scope.go:117] "RemoveContainer" containerID="facb3599387ca46ebfd798fb8e4dc8dd869907cd58cd8b3f44ac7591a2c06e38" Jan 27 14:35:05 crc kubenswrapper[4729]: E0127 14:35:02.948107 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-ddd9fd968-xf5xj_openstack(6976f926-e4ad-496f-926f-4e57870ba474)\"" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" podUID="6976f926-e4ad-496f-926f-4e57870ba474" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999273 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data\") pod 
\"94b88542-ca01-4c26-aa13-044dfb3684fc\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999349 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v9fp\" (UniqueName: \"kubernetes.io/projected/94b88542-ca01-4c26-aa13-044dfb3684fc-kube-api-access-6v9fp\") pod \"94b88542-ca01-4c26-aa13-044dfb3684fc\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999422 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-combined-ca-bundle\") pod \"94b88542-ca01-4c26-aa13-044dfb3684fc\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999479 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data-custom\") pod \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999508 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krpsp\" (UniqueName: \"kubernetes.io/projected/ded19a5c-60ed-44f6-8e89-2b90d707fd66-kube-api-access-krpsp\") pod \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999640 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data-custom\") pod \"94b88542-ca01-4c26-aa13-044dfb3684fc\" (UID: \"94b88542-ca01-4c26-aa13-044dfb3684fc\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999687 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data\") pod \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:02.999755 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-combined-ca-bundle\") pod \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\" (UID: \"ded19a5c-60ed-44f6-8e89-2b90d707fd66\") " Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.010323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded19a5c-60ed-44f6-8e89-2b90d707fd66-kube-api-access-krpsp" (OuterVolumeSpecName: "kube-api-access-krpsp") pod "ded19a5c-60ed-44f6-8e89-2b90d707fd66" (UID: "ded19a5c-60ed-44f6-8e89-2b90d707fd66"). InnerVolumeSpecName "kube-api-access-krpsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.012981 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b88542-ca01-4c26-aa13-044dfb3684fc-kube-api-access-6v9fp" (OuterVolumeSpecName: "kube-api-access-6v9fp") pod "94b88542-ca01-4c26-aa13-044dfb3684fc" (UID: "94b88542-ca01-4c26-aa13-044dfb3684fc"). InnerVolumeSpecName "kube-api-access-6v9fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.013821 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ded19a5c-60ed-44f6-8e89-2b90d707fd66" (UID: "ded19a5c-60ed-44f6-8e89-2b90d707fd66"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.029298 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94b88542-ca01-4c26-aa13-044dfb3684fc" (UID: "94b88542-ca01-4c26-aa13-044dfb3684fc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.074604 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94b88542-ca01-4c26-aa13-044dfb3684fc" (UID: "94b88542-ca01-4c26-aa13-044dfb3684fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.104169 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ded19a5c-60ed-44f6-8e89-2b90d707fd66" (UID: "ded19a5c-60ed-44f6-8e89-2b90d707fd66"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.119307 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.119350 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.119362 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v9fp\" (UniqueName: \"kubernetes.io/projected/94b88542-ca01-4c26-aa13-044dfb3684fc-kube-api-access-6v9fp\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.119376 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.119389 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.119406 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krpsp\" (UniqueName: \"kubernetes.io/projected/ded19a5c-60ed-44f6-8e89-2b90d707fd66-kube-api-access-krpsp\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.207230 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data" (OuterVolumeSpecName: "config-data") pod 
"ded19a5c-60ed-44f6-8e89-2b90d707fd66" (UID: "ded19a5c-60ed-44f6-8e89-2b90d707fd66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.223396 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ded19a5c-60ed-44f6-8e89-2b90d707fd66-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.268081 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data" (OuterVolumeSpecName: "config-data") pod "94b88542-ca01-4c26-aa13-044dfb3684fc" (UID: "94b88542-ca01-4c26-aa13-044dfb3684fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.328674 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b88542-ca01-4c26-aa13-044dfb3684fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.334508 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5459d8c648-mwnh7"] Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.424042 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5459d8c648-mwnh7"] Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.565628 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7fdc6f4965-8bhj7"] Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.607921 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7fdc6f4965-8bhj7"] Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.986019 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" 
event={"ID":"5f89791b-c8ab-4897-8989-b3aab9352e5e","Type":"ContainerStarted","Data":"7663c3e0b22990aab84ea7e2654791b5fc9f5e1cf8019071e5dbec8bb277fbf0"} Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:03.986433 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:04.024680 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55554cf5d6-rht7q" event={"ID":"21a1bec4-36d5-438f-8575-665c6edad962","Type":"ContainerStarted","Data":"793730f20c62a769cf31857f01a74079c290c34d6aec6ec0ed627da23c338e04"} Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:04.024925 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:04.030021 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" podStartSLOduration=4.030001251 podStartE2EDuration="4.030001251s" podCreationTimestamp="2026-01-27 14:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:04.01406038 +0000 UTC m=+1790.598251394" watchObservedRunningTime="2026-01-27 14:35:04.030001251 +0000 UTC m=+1790.614192255" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:04.060790 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-55554cf5d6-rht7q" podStartSLOduration=4.060772822 podStartE2EDuration="4.060772822s" podCreationTimestamp="2026-01-27 14:35:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:04.043966217 +0000 UTC m=+1790.628157241" watchObservedRunningTime="2026-01-27 14:35:04.060772822 +0000 UTC m=+1790.644963826" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 
14:35:04.081770 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b88542-ca01-4c26-aa13-044dfb3684fc" path="/var/lib/kubelet/pods/94b88542-ca01-4c26-aa13-044dfb3684fc/volumes" Jan 27 14:35:05 crc kubenswrapper[4729]: I0127 14:35:04.082706 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded19a5c-60ed-44f6-8e89-2b90d707fd66" path="/var/lib/kubelet/pods/ded19a5c-60ed-44f6-8e89-2b90d707fd66/volumes" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.050649 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.051167 4729 generic.go:334] "Generic (PLEG): container finished" podID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerID="e4e442910ba0923662137915c2ceb1c4aa4a12f25271824c2ed01e9130803594" exitCode=0 Jan 27 14:35:06 crc kubenswrapper[4729]: E0127 14:35:06.051459 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.064594 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerDied","Data":"e4e442910ba0923662137915c2ceb1c4aa4a12f25271824c2ed01e9130803594"} Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.064647 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d090e5ba-bd3e-444f-9d7f-cbe3151f17df","Type":"ContainerDied","Data":"3b704319c1c765b3cb79ce17ff0837f37870ff75b803063b3dd576e4a455af49"} Jan 27 14:35:06 crc 
kubenswrapper[4729]: I0127 14:35:06.064658 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b704319c1c765b3cb79ce17ff0837f37870ff75b803063b3dd576e4a455af49" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.065250 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221381 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-combined-ca-bundle\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221586 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-log-httpd\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221637 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-config-data\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221677 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncjrk\" (UniqueName: \"kubernetes.io/projected/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-kube-api-access-ncjrk\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221718 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-run-httpd\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221801 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-scripts\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.221842 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-sg-core-conf-yaml\") pod \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\" (UID: \"d090e5ba-bd3e-444f-9d7f-cbe3151f17df\") " Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.222190 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.222306 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.223562 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.223594 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.230145 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-kube-api-access-ncjrk" (OuterVolumeSpecName: "kube-api-access-ncjrk") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "kube-api-access-ncjrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.250477 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-scripts" (OuterVolumeSpecName: "scripts") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.262300 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.334140 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncjrk\" (UniqueName: \"kubernetes.io/projected/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-kube-api-access-ncjrk\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.334175 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.334187 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.365304 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.390117 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-config-data" (OuterVolumeSpecName: "config-data") pod "d090e5ba-bd3e-444f-9d7f-cbe3151f17df" (UID: "d090e5ba-bd3e-444f-9d7f-cbe3151f17df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.436600 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:06 crc kubenswrapper[4729]: I0127 14:35:06.436635 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d090e5ba-bd3e-444f-9d7f-cbe3151f17df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.061208 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.110937 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.131283 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.149690 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:07 crc kubenswrapper[4729]: E0127 14:35:07.150262 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="proxy-httpd" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150288 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="proxy-httpd" Jan 27 14:35:07 crc kubenswrapper[4729]: E0127 14:35:07.150310 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" containerName="init" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150317 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" containerName="init" Jan 27 14:35:07 crc 
kubenswrapper[4729]: E0127 14:35:07.150332 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b88542-ca01-4c26-aa13-044dfb3684fc" containerName="heat-cfnapi" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150341 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b88542-ca01-4c26-aa13-044dfb3684fc" containerName="heat-cfnapi" Jan 27 14:35:07 crc kubenswrapper[4729]: E0127 14:35:07.150373 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-central-agent" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150382 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-central-agent" Jan 27 14:35:07 crc kubenswrapper[4729]: E0127 14:35:07.150401 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" containerName="dnsmasq-dns" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150408 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" containerName="dnsmasq-dns" Jan 27 14:35:07 crc kubenswrapper[4729]: E0127 14:35:07.150424 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded19a5c-60ed-44f6-8e89-2b90d707fd66" containerName="heat-api" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150432 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded19a5c-60ed-44f6-8e89-2b90d707fd66" containerName="heat-api" Jan 27 14:35:07 crc kubenswrapper[4729]: E0127 14:35:07.150454 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-notification-agent" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150461 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-notification-agent" Jan 27 14:35:07 
crc kubenswrapper[4729]: E0127 14:35:07.150470 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="sg-core" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150479 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="sg-core" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150745 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-central-agent" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150761 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="proxy-httpd" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150770 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b88542-ca01-4c26-aa13-044dfb3684fc" containerName="heat-cfnapi" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150781 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6b3230-207e-4f3b-b095-dae641faebef" containerName="dnsmasq-dns" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150791 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="sg-core" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150801 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" containerName="ceilometer-notification-agent" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.150816 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded19a5c-60ed-44f6-8e89-2b90d707fd66" containerName="heat-api" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.153035 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.156595 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.157107 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.184073 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.255547 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j96\" (UniqueName: \"kubernetes.io/projected/14d3de75-2a85-4dc5-8d62-9febba3f31a7-kube-api-access-f5j96\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.255727 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-log-httpd\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.255788 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-run-httpd\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.255840 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.256064 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-config-data\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.256323 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.256394 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-scripts\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358245 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358605 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-config-data\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358708 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358741 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-scripts\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358771 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5j96\" (UniqueName: \"kubernetes.io/projected/14d3de75-2a85-4dc5-8d62-9febba3f31a7-kube-api-access-f5j96\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358917 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-log-httpd\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.358964 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-run-httpd\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.370180 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " 
pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.370397 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-scripts\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.370769 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.370817 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-config-data\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.389013 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-run-httpd\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.389120 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-log-httpd\") pod \"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.392452 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j96\" (UniqueName: \"kubernetes.io/projected/14d3de75-2a85-4dc5-8d62-9febba3f31a7-kube-api-access-f5j96\") pod 
\"ceilometer-0\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " pod="openstack/ceilometer-0" Jan 27 14:35:07 crc kubenswrapper[4729]: I0127 14:35:07.482485 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:08 crc kubenswrapper[4729]: W0127 14:35:08.009573 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14d3de75_2a85_4dc5_8d62_9febba3f31a7.slice/crio-d5dfab7a5b44a0216db16f9ba61d972a93bb9420dcb73fb7871663b4fe70e28b WatchSource:0}: Error finding container d5dfab7a5b44a0216db16f9ba61d972a93bb9420dcb73fb7871663b4fe70e28b: Status 404 returned error can't find the container with id d5dfab7a5b44a0216db16f9ba61d972a93bb9420dcb73fb7871663b4fe70e28b Jan 27 14:35:08 crc kubenswrapper[4729]: I0127 14:35:08.032104 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:08 crc kubenswrapper[4729]: I0127 14:35:08.065416 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d090e5ba-bd3e-444f-9d7f-cbe3151f17df" path="/var/lib/kubelet/pods/d090e5ba-bd3e-444f-9d7f-cbe3151f17df/volumes" Jan 27 14:35:08 crc kubenswrapper[4729]: I0127 14:35:08.077919 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerStarted","Data":"d5dfab7a5b44a0216db16f9ba61d972a93bb9420dcb73fb7871663b4fe70e28b"} Jan 27 14:35:09 crc kubenswrapper[4729]: I0127 14:35:09.709293 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:35:10 crc kubenswrapper[4729]: I0127 14:35:10.101812 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerStarted","Data":"7f5fe8d125da0eb36da1b5c7ea0ecd3498d08ce7797295ef7c2ca3faccecffd7"} Jan 27 14:35:10 
crc kubenswrapper[4729]: I0127 14:35:10.757290 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:12 crc kubenswrapper[4729]: I0127 14:35:12.125041 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerStarted","Data":"6ecd3aa377ff9afe8e2293dccf408bf1792017ea358a6a751dd5e4451bc1e069"} Jan 27 14:35:13 crc kubenswrapper[4729]: I0127 14:35:13.157154 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerStarted","Data":"9330d68680f55ea82cd15e35dbf714d296fd99d013521785f91324046608d215"} Jan 27 14:35:13 crc kubenswrapper[4729]: I0127 14:35:13.273508 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:35:13 crc kubenswrapper[4729]: I0127 14:35:13.351600 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-ddd9fd968-xf5xj"] Jan 27 14:35:13 crc kubenswrapper[4729]: I0127 14:35:13.495238 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:35:13 crc kubenswrapper[4729]: I0127 14:35:13.586692 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5579488986-4dxmp"] Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.024908 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.035082 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j9k2\" (UniqueName: \"kubernetes.io/projected/6976f926-e4ad-496f-926f-4e57870ba474-kube-api-access-7j9k2\") pod \"6976f926-e4ad-496f-926f-4e57870ba474\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.035211 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data-custom\") pod \"6976f926-e4ad-496f-926f-4e57870ba474\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.035352 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data\") pod \"6976f926-e4ad-496f-926f-4e57870ba474\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.035474 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-combined-ca-bundle\") pod \"6976f926-e4ad-496f-926f-4e57870ba474\" (UID: \"6976f926-e4ad-496f-926f-4e57870ba474\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.053360 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6976f926-e4ad-496f-926f-4e57870ba474" (UID: "6976f926-e4ad-496f-926f-4e57870ba474"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.079434 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6976f926-e4ad-496f-926f-4e57870ba474-kube-api-access-7j9k2" (OuterVolumeSpecName: "kube-api-access-7j9k2") pod "6976f926-e4ad-496f-926f-4e57870ba474" (UID: "6976f926-e4ad-496f-926f-4e57870ba474"). InnerVolumeSpecName "kube-api-access-7j9k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.140263 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j9k2\" (UniqueName: \"kubernetes.io/projected/6976f926-e4ad-496f-926f-4e57870ba474-kube-api-access-7j9k2\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.140701 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.156349 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6976f926-e4ad-496f-926f-4e57870ba474" (UID: "6976f926-e4ad-496f-926f-4e57870ba474"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.196488 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.243148 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.243257 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data" (OuterVolumeSpecName: "config-data") pod "6976f926-e4ad-496f-926f-4e57870ba474" (UID: "6976f926-e4ad-496f-926f-4e57870ba474"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.246950 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ddd9fd968-xf5xj" event={"ID":"6976f926-e4ad-496f-926f-4e57870ba474","Type":"ContainerDied","Data":"0841d077abed928a41a5c3edc96009230d20461964fb514692ed718fb2da7ac8"} Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.247019 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5579488986-4dxmp" event={"ID":"fd036f5b-4ca1-4f6e-99b8-42fe511561dd","Type":"ContainerDied","Data":"e4bb4727714fc54028552f6b0e6f3ceee86449b45bf66a691ac45b1b6f960f98"} Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.247036 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bb4727714fc54028552f6b0e6f3ceee86449b45bf66a691ac45b1b6f960f98" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.247061 4729 scope.go:117] "RemoveContainer" containerID="facb3599387ca46ebfd798fb8e4dc8dd869907cd58cd8b3f44ac7591a2c06e38" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.291260 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.345203 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9kn\" (UniqueName: \"kubernetes.io/projected/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-kube-api-access-5x9kn\") pod \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.345302 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data-custom\") pod \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.345358 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-combined-ca-bundle\") pod \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.345582 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data\") pod \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\" (UID: \"fd036f5b-4ca1-4f6e-99b8-42fe511561dd\") " Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.346797 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6976f926-e4ad-496f-926f-4e57870ba474-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.349966 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-kube-api-access-5x9kn" 
(OuterVolumeSpecName: "kube-api-access-5x9kn") pod "fd036f5b-4ca1-4f6e-99b8-42fe511561dd" (UID: "fd036f5b-4ca1-4f6e-99b8-42fe511561dd"). InnerVolumeSpecName "kube-api-access-5x9kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.356398 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd036f5b-4ca1-4f6e-99b8-42fe511561dd" (UID: "fd036f5b-4ca1-4f6e-99b8-42fe511561dd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.385386 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd036f5b-4ca1-4f6e-99b8-42fe511561dd" (UID: "fd036f5b-4ca1-4f6e-99b8-42fe511561dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.415211 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data" (OuterVolumeSpecName: "config-data") pod "fd036f5b-4ca1-4f6e-99b8-42fe511561dd" (UID: "fd036f5b-4ca1-4f6e-99b8-42fe511561dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.450152 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.450190 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9kn\" (UniqueName: \"kubernetes.io/projected/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-kube-api-access-5x9kn\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.450204 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.450383 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd036f5b-4ca1-4f6e-99b8-42fe511561dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.534101 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-ddd9fd968-xf5xj"] Jan 27 14:35:14 crc kubenswrapper[4729]: I0127 14:35:14.547214 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-ddd9fd968-xf5xj"] Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.215099 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerStarted","Data":"6ca5b5aff42dac44bee276ec172356847739391d952fbf43e8cd22ece5591f44"} Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.215345 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" 
containerName="ceilometer-central-agent" containerID="cri-o://7f5fe8d125da0eb36da1b5c7ea0ecd3498d08ce7797295ef7c2ca3faccecffd7" gracePeriod=30 Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.215480 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="proxy-httpd" containerID="cri-o://6ca5b5aff42dac44bee276ec172356847739391d952fbf43e8cd22ece5591f44" gracePeriod=30 Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.215513 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-notification-agent" containerID="cri-o://6ecd3aa377ff9afe8e2293dccf408bf1792017ea358a6a751dd5e4451bc1e069" gracePeriod=30 Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.215445 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="sg-core" containerID="cri-o://9330d68680f55ea82cd15e35dbf714d296fd99d013521785f91324046608d215" gracePeriod=30 Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.215751 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.217127 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5579488986-4dxmp" Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.242186 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.253438016 podStartE2EDuration="8.242166259s" podCreationTimestamp="2026-01-27 14:35:07 +0000 UTC" firstStartedPulling="2026-01-27 14:35:08.013538473 +0000 UTC m=+1794.597729477" lastFinishedPulling="2026-01-27 14:35:14.002266716 +0000 UTC m=+1800.586457720" observedRunningTime="2026-01-27 14:35:15.240686508 +0000 UTC m=+1801.824877542" watchObservedRunningTime="2026-01-27 14:35:15.242166259 +0000 UTC m=+1801.826357263" Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.285768 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5579488986-4dxmp"] Jan 27 14:35:15 crc kubenswrapper[4729]: I0127 14:35:15.297296 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5579488986-4dxmp"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.065182 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6976f926-e4ad-496f-926f-4e57870ba474" path="/var/lib/kubelet/pods/6976f926-e4ad-496f-926f-4e57870ba474/volumes" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.066241 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" path="/var/lib/kubelet/pods/fd036f5b-4ca1-4f6e-99b8-42fe511561dd/volumes" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.177307 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-226hb"] Jan 27 14:35:16 crc kubenswrapper[4729]: E0127 14:35:16.177761 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6976f926-e4ad-496f-926f-4e57870ba474" containerName="heat-cfnapi" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.177779 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6976f926-e4ad-496f-926f-4e57870ba474" containerName="heat-cfnapi" Jan 27 14:35:16 crc kubenswrapper[4729]: E0127 14:35:16.177813 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerName="heat-api" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.177820 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerName="heat-api" Jan 27 14:35:16 crc kubenswrapper[4729]: E0127 14:35:16.177839 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerName="heat-api" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.177847 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerName="heat-api" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.178069 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6976f926-e4ad-496f-926f-4e57870ba474" containerName="heat-cfnapi" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.178095 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerName="heat-api" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.178111 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd036f5b-4ca1-4f6e-99b8-42fe511561dd" containerName="heat-api" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.178126 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6976f926-e4ad-496f-926f-4e57870ba474" containerName="heat-cfnapi" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.178899 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.196308 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbs7m\" (UniqueName: \"kubernetes.io/projected/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-kube-api-access-zbs7m\") pod \"nova-api-db-create-226hb\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.196366 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-operator-scripts\") pod \"nova-api-db-create-226hb\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.196700 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-226hb"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.243835 4729 generic.go:334] "Generic (PLEG): container finished" podID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerID="6ca5b5aff42dac44bee276ec172356847739391d952fbf43e8cd22ece5591f44" exitCode=0 Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.244325 4729 generic.go:334] "Generic (PLEG): container finished" podID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerID="9330d68680f55ea82cd15e35dbf714d296fd99d013521785f91324046608d215" exitCode=2 Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.244340 4729 generic.go:334] "Generic (PLEG): container finished" podID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerID="6ecd3aa377ff9afe8e2293dccf408bf1792017ea358a6a751dd5e4451bc1e069" exitCode=0 Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.243936 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerDied","Data":"6ca5b5aff42dac44bee276ec172356847739391d952fbf43e8cd22ece5591f44"} Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.244388 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerDied","Data":"9330d68680f55ea82cd15e35dbf714d296fd99d013521785f91324046608d215"} Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.244407 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerDied","Data":"6ecd3aa377ff9afe8e2293dccf408bf1792017ea358a6a751dd5e4451bc1e069"} Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.265224 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zvnjb"] Jan 27 14:35:16 crc kubenswrapper[4729]: E0127 14:35:16.265764 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6976f926-e4ad-496f-926f-4e57870ba474" containerName="heat-cfnapi" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.265784 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6976f926-e4ad-496f-926f-4e57870ba474" containerName="heat-cfnapi" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.267075 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.291695 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zvnjb"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.317870 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c205222a-ce44-4153-816d-edc3ec8f8240-operator-scripts\") pod \"nova-cell0-db-create-zvnjb\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.318004 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbs7m\" (UniqueName: \"kubernetes.io/projected/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-kube-api-access-zbs7m\") pod \"nova-api-db-create-226hb\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.318056 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-operator-scripts\") pod \"nova-api-db-create-226hb\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.318088 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdjtb\" (UniqueName: \"kubernetes.io/projected/c205222a-ce44-4153-816d-edc3ec8f8240-kube-api-access-wdjtb\") pod \"nova-cell0-db-create-zvnjb\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.322923 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-operator-scripts\") pod \"nova-api-db-create-226hb\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.358210 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbs7m\" (UniqueName: \"kubernetes.io/projected/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-kube-api-access-zbs7m\") pod \"nova-api-db-create-226hb\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.409758 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-krsxm"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.412107 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.420012 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdjtb\" (UniqueName: \"kubernetes.io/projected/c205222a-ce44-4153-816d-edc3ec8f8240-kube-api-access-wdjtb\") pod \"nova-cell0-db-create-zvnjb\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.420344 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c205222a-ce44-4153-816d-edc3ec8f8240-operator-scripts\") pod \"nova-cell0-db-create-zvnjb\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.445132 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c205222a-ce44-4153-816d-edc3ec8f8240-operator-scripts\") pod 
\"nova-cell0-db-create-zvnjb\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.452633 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4af9-account-create-update-c7f5v"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.462927 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.472284 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.472957 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdjtb\" (UniqueName: \"kubernetes.io/projected/c205222a-ce44-4153-816d-edc3ec8f8240-kube-api-access-wdjtb\") pod \"nova-cell0-db-create-zvnjb\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.474992 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-krsxm"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.477845 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.494028 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4af9-account-create-update-c7f5v"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.502767 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.522291 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn66r\" (UniqueName: \"kubernetes.io/projected/8b863055-1278-4ddd-87fc-5eb337ec92b0-kube-api-access-gn66r\") pod \"nova-cell1-db-create-krsxm\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.522559 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b53e6829-f0c7-4c38-b258-42230df58947-operator-scripts\") pod \"nova-api-4af9-account-create-update-c7f5v\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.522636 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bqk\" (UniqueName: \"kubernetes.io/projected/b53e6829-f0c7-4c38-b258-42230df58947-kube-api-access-t6bqk\") pod \"nova-api-4af9-account-create-update-c7f5v\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.522679 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b863055-1278-4ddd-87fc-5eb337ec92b0-operator-scripts\") pod \"nova-cell1-db-create-krsxm\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.602775 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.621438 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6cf6fb876b-qnrdg"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.621684 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6cf6fb876b-qnrdg" podUID="4ac544ed-dbcb-46f4-9324-7000feda0230" containerName="heat-engine" containerID="cri-o://17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" gracePeriod=60 Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.624443 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b53e6829-f0c7-4c38-b258-42230df58947-operator-scripts\") pod \"nova-api-4af9-account-create-update-c7f5v\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.624525 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6bqk\" (UniqueName: \"kubernetes.io/projected/b53e6829-f0c7-4c38-b258-42230df58947-kube-api-access-t6bqk\") pod \"nova-api-4af9-account-create-update-c7f5v\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.624564 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b863055-1278-4ddd-87fc-5eb337ec92b0-operator-scripts\") pod \"nova-cell1-db-create-krsxm\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.624668 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn66r\" 
(UniqueName: \"kubernetes.io/projected/8b863055-1278-4ddd-87fc-5eb337ec92b0-kube-api-access-gn66r\") pod \"nova-cell1-db-create-krsxm\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.625637 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b53e6829-f0c7-4c38-b258-42230df58947-operator-scripts\") pod \"nova-api-4af9-account-create-update-c7f5v\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.626182 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b863055-1278-4ddd-87fc-5eb337ec92b0-operator-scripts\") pod \"nova-cell1-db-create-krsxm\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.656512 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6bqk\" (UniqueName: \"kubernetes.io/projected/b53e6829-f0c7-4c38-b258-42230df58947-kube-api-access-t6bqk\") pod \"nova-api-4af9-account-create-update-c7f5v\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.664755 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-100f-account-create-update-l78l7"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.666907 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.669721 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.678980 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn66r\" (UniqueName: \"kubernetes.io/projected/8b863055-1278-4ddd-87fc-5eb337ec92b0-kube-api-access-gn66r\") pod \"nova-cell1-db-create-krsxm\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.707430 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-100f-account-create-update-l78l7"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.728630 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m4x4\" (UniqueName: \"kubernetes.io/projected/ea0fb13c-3315-42a9-9fdc-14492e98546f-kube-api-access-9m4x4\") pod \"nova-cell0-100f-account-create-update-l78l7\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.728864 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea0fb13c-3315-42a9-9fdc-14492e98546f-operator-scripts\") pod \"nova-cell0-100f-account-create-update-l78l7\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.782635 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-996c-account-create-update-vkz4r"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.792748 4729 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.796607 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.804532 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-996c-account-create-update-vkz4r"] Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.831410 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea0fb13c-3315-42a9-9fdc-14492e98546f-operator-scripts\") pod \"nova-cell0-100f-account-create-update-l78l7\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.831855 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9qs\" (UniqueName: \"kubernetes.io/projected/7126250f-09fc-45d2-ba39-636094d89da7-kube-api-access-zj9qs\") pod \"nova-cell1-996c-account-create-update-vkz4r\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.831896 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7126250f-09fc-45d2-ba39-636094d89da7-operator-scripts\") pod \"nova-cell1-996c-account-create-update-vkz4r\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.831963 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m4x4\" (UniqueName: 
\"kubernetes.io/projected/ea0fb13c-3315-42a9-9fdc-14492e98546f-kube-api-access-9m4x4\") pod \"nova-cell0-100f-account-create-update-l78l7\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.842678 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea0fb13c-3315-42a9-9fdc-14492e98546f-operator-scripts\") pod \"nova-cell0-100f-account-create-update-l78l7\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.847956 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.886634 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m4x4\" (UniqueName: \"kubernetes.io/projected/ea0fb13c-3315-42a9-9fdc-14492e98546f-kube-api-access-9m4x4\") pod \"nova-cell0-100f-account-create-update-l78l7\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.894862 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.934312 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9qs\" (UniqueName: \"kubernetes.io/projected/7126250f-09fc-45d2-ba39-636094d89da7-kube-api-access-zj9qs\") pod \"nova-cell1-996c-account-create-update-vkz4r\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.934350 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7126250f-09fc-45d2-ba39-636094d89da7-operator-scripts\") pod \"nova-cell1-996c-account-create-update-vkz4r\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.935175 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7126250f-09fc-45d2-ba39-636094d89da7-operator-scripts\") pod \"nova-cell1-996c-account-create-update-vkz4r\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.948981 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:16 crc kubenswrapper[4729]: I0127 14:35:16.988738 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9qs\" (UniqueName: \"kubernetes.io/projected/7126250f-09fc-45d2-ba39-636094d89da7-kube-api-access-zj9qs\") pod \"nova-cell1-996c-account-create-update-vkz4r\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:17 crc kubenswrapper[4729]: I0127 14:35:17.194935 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:17 crc kubenswrapper[4729]: I0127 14:35:17.690901 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-226hb"] Jan 27 14:35:17 crc kubenswrapper[4729]: W0127 14:35:17.694153 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc205222a_ce44_4153_816d_edc3ec8f8240.slice/crio-36cc0278f6bb6292ac5efe771c9e9ea8778635053423a76a530d526aef8c67d1 WatchSource:0}: Error finding container 36cc0278f6bb6292ac5efe771c9e9ea8778635053423a76a530d526aef8c67d1: Status 404 returned error can't find the container with id 36cc0278f6bb6292ac5efe771c9e9ea8778635053423a76a530d526aef8c67d1 Jan 27 14:35:17 crc kubenswrapper[4729]: I0127 14:35:17.703332 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zvnjb"] Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.109710 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-krsxm"] Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.233686 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-996c-account-create-update-vkz4r"] Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.281802 4729 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-100f-account-create-update-l78l7"] Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.308155 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4af9-account-create-update-c7f5v"] Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.308791 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zvnjb" event={"ID":"c205222a-ce44-4153-816d-edc3ec8f8240","Type":"ContainerStarted","Data":"36cc0278f6bb6292ac5efe771c9e9ea8778635053423a76a530d526aef8c67d1"} Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.310204 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-krsxm" event={"ID":"8b863055-1278-4ddd-87fc-5eb337ec92b0","Type":"ContainerStarted","Data":"c3a9abd9d14da711b4d340f9990627fb99a6c68ed4ca865722760a86833c7e5f"} Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.311441 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-226hb" event={"ID":"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce","Type":"ContainerStarted","Data":"d7735991bba2dfa787fad71351f6016a02d56ce010db6e7f354974553b6c62cd"} Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.995072 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.996841 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-log" containerID="cri-o://538099ac1b22a1b548e632269a4ac1a5f801ecf28b1c6e3913d34b01c6ec5efd" gracePeriod=30 Jan 27 14:35:18 crc kubenswrapper[4729]: I0127 14:35:18.996997 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-httpd" 
containerID="cri-o://9e32dd7451d221d043d9724a8f9a6b7f7d9a0369c27ad7cc736ec0d9250da2f6" gracePeriod=30 Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.323725 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-226hb" event={"ID":"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce","Type":"ContainerStarted","Data":"2d9967418ed1582b15bfd89241e6e200734128faed4093efb58ac1f52c387e3d"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.326906 4729 generic.go:334] "Generic (PLEG): container finished" podID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerID="538099ac1b22a1b548e632269a4ac1a5f801ecf28b1c6e3913d34b01c6ec5efd" exitCode=143 Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.326987 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52","Type":"ContainerDied","Data":"538099ac1b22a1b548e632269a4ac1a5f801ecf28b1c6e3913d34b01c6ec5efd"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.329045 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" event={"ID":"7126250f-09fc-45d2-ba39-636094d89da7","Type":"ContainerStarted","Data":"52f60d42a7c562fa45bc4ab6b659c8a0829890da72d35bfa36ae4428cc59292d"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.329077 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" event={"ID":"7126250f-09fc-45d2-ba39-636094d89da7","Type":"ContainerStarted","Data":"9a782af4a1f5142b52acb050cde756859ed2f624739ada7a4d5d42a560e52638"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.330947 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-100f-account-create-update-l78l7" event={"ID":"ea0fb13c-3315-42a9-9fdc-14492e98546f","Type":"ContainerStarted","Data":"f974db68f0ca2dea7e3810d97492d147a0990c8ca4585933b6e57b7199982326"} Jan 27 14:35:19 crc 
kubenswrapper[4729]: I0127 14:35:19.330992 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-100f-account-create-update-l78l7" event={"ID":"ea0fb13c-3315-42a9-9fdc-14492e98546f","Type":"ContainerStarted","Data":"08f64ca7f9fc72ca768b86671476eafa0e54e687fb07d6b8563d684493863102"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.339284 4729 generic.go:334] "Generic (PLEG): container finished" podID="c205222a-ce44-4153-816d-edc3ec8f8240" containerID="4f7ecbc6bb3b3f3a94d9cd054d36fbdc6de58a6eaec3c3044acf4754f69c8dd7" exitCode=0 Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.339363 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zvnjb" event={"ID":"c205222a-ce44-4153-816d-edc3ec8f8240","Type":"ContainerDied","Data":"4f7ecbc6bb3b3f3a94d9cd054d36fbdc6de58a6eaec3c3044acf4754f69c8dd7"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.341953 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4af9-account-create-update-c7f5v" event={"ID":"b53e6829-f0c7-4c38-b258-42230df58947","Type":"ContainerStarted","Data":"eaffbc34719ccfbda5bcef814dcc2d99bb95da1c34b7996c6662d02e2a4efec3"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.342005 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4af9-account-create-update-c7f5v" event={"ID":"b53e6829-f0c7-4c38-b258-42230df58947","Type":"ContainerStarted","Data":"5f2d38ee0b9ec6b7b0e1e1ca731f3c5230f07b185d7bc6c5cb2d1a1e895ebb0c"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.344833 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-krsxm" event={"ID":"8b863055-1278-4ddd-87fc-5eb337ec92b0","Type":"ContainerStarted","Data":"43d0bab29f64ac230df67977d9b8d1484730ac0c2c3b6320e6d4e30ed4049352"} Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.346416 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-db-create-226hb" podStartSLOduration=3.3463965780000002 podStartE2EDuration="3.346396578s" podCreationTimestamp="2026-01-27 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:19.343908849 +0000 UTC m=+1805.928099853" watchObservedRunningTime="2026-01-27 14:35:19.346396578 +0000 UTC m=+1805.930587582" Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.378286 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-100f-account-create-update-l78l7" podStartSLOduration=3.378263269 podStartE2EDuration="3.378263269s" podCreationTimestamp="2026-01-27 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:19.357569467 +0000 UTC m=+1805.941760471" watchObservedRunningTime="2026-01-27 14:35:19.378263269 +0000 UTC m=+1805.962454283" Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.393646 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" podStartSLOduration=3.393624834 podStartE2EDuration="3.393624834s" podCreationTimestamp="2026-01-27 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:19.371570954 +0000 UTC m=+1805.955761958" watchObservedRunningTime="2026-01-27 14:35:19.393624834 +0000 UTC m=+1805.977815838" Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.412678 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-4af9-account-create-update-c7f5v" podStartSLOduration=3.4126615 podStartE2EDuration="3.4126615s" podCreationTimestamp="2026-01-27 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:19.401274255 +0000 UTC m=+1805.985465289" watchObservedRunningTime="2026-01-27 14:35:19.4126615 +0000 UTC m=+1805.996852504" Jan 27 14:35:19 crc kubenswrapper[4729]: I0127 14:35:19.433524 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-krsxm" podStartSLOduration=3.433505006 podStartE2EDuration="3.433505006s" podCreationTimestamp="2026-01-27 14:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:19.428129707 +0000 UTC m=+1806.012320711" watchObservedRunningTime="2026-01-27 14:35:19.433505006 +0000 UTC m=+1806.017696010" Jan 27 14:35:19 crc kubenswrapper[4729]: E0127 14:35:19.645720 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:35:19 crc kubenswrapper[4729]: E0127 14:35:19.647546 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:35:19 crc kubenswrapper[4729]: E0127 14:35:19.648847 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:35:19 crc kubenswrapper[4729]: E0127 
14:35:19.648969 4729 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6cf6fb876b-qnrdg" podUID="4ac544ed-dbcb-46f4-9324-7000feda0230" containerName="heat-engine" Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.051784 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:35:20 crc kubenswrapper[4729]: E0127 14:35:20.052121 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.357958 4729 generic.go:334] "Generic (PLEG): container finished" podID="8b863055-1278-4ddd-87fc-5eb337ec92b0" containerID="43d0bab29f64ac230df67977d9b8d1484730ac0c2c3b6320e6d4e30ed4049352" exitCode=0 Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.358016 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-krsxm" event={"ID":"8b863055-1278-4ddd-87fc-5eb337ec92b0","Type":"ContainerDied","Data":"43d0bab29f64ac230df67977d9b8d1484730ac0c2c3b6320e6d4e30ed4049352"} Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.362546 4729 generic.go:334] "Generic (PLEG): container finished" podID="e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" containerID="2d9967418ed1582b15bfd89241e6e200734128faed4093efb58ac1f52c387e3d" exitCode=0 Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.362607 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-226hb" 
event={"ID":"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce","Type":"ContainerDied","Data":"2d9967418ed1582b15bfd89241e6e200734128faed4093efb58ac1f52c387e3d"} Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.364732 4729 generic.go:334] "Generic (PLEG): container finished" podID="7126250f-09fc-45d2-ba39-636094d89da7" containerID="52f60d42a7c562fa45bc4ab6b659c8a0829890da72d35bfa36ae4428cc59292d" exitCode=0 Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.364790 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" event={"ID":"7126250f-09fc-45d2-ba39-636094d89da7","Type":"ContainerDied","Data":"52f60d42a7c562fa45bc4ab6b659c8a0829890da72d35bfa36ae4428cc59292d"} Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.366682 4729 generic.go:334] "Generic (PLEG): container finished" podID="ea0fb13c-3315-42a9-9fdc-14492e98546f" containerID="f974db68f0ca2dea7e3810d97492d147a0990c8ca4585933b6e57b7199982326" exitCode=0 Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.366741 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-100f-account-create-update-l78l7" event={"ID":"ea0fb13c-3315-42a9-9fdc-14492e98546f","Type":"ContainerDied","Data":"f974db68f0ca2dea7e3810d97492d147a0990c8ca4585933b6e57b7199982326"} Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.370555 4729 generic.go:334] "Generic (PLEG): container finished" podID="b53e6829-f0c7-4c38-b258-42230df58947" containerID="eaffbc34719ccfbda5bcef814dcc2d99bb95da1c34b7996c6662d02e2a4efec3" exitCode=0 Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.370768 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4af9-account-create-update-c7f5v" event={"ID":"b53e6829-f0c7-4c38-b258-42230df58947","Type":"ContainerDied","Data":"eaffbc34719ccfbda5bcef814dcc2d99bb95da1c34b7996c6662d02e2a4efec3"} Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.642099 4729 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.642430 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-log" containerID="cri-o://7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610" gracePeriod=30 Jan 27 14:35:20 crc kubenswrapper[4729]: I0127 14:35:20.643057 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-httpd" containerID="cri-o://2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230" gracePeriod=30 Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.049659 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.160017 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c205222a-ce44-4153-816d-edc3ec8f8240-operator-scripts\") pod \"c205222a-ce44-4153-816d-edc3ec8f8240\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.160198 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdjtb\" (UniqueName: \"kubernetes.io/projected/c205222a-ce44-4153-816d-edc3ec8f8240-kube-api-access-wdjtb\") pod \"c205222a-ce44-4153-816d-edc3ec8f8240\" (UID: \"c205222a-ce44-4153-816d-edc3ec8f8240\") " Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.160836 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c205222a-ce44-4153-816d-edc3ec8f8240-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"c205222a-ce44-4153-816d-edc3ec8f8240" (UID: "c205222a-ce44-4153-816d-edc3ec8f8240"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.161375 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c205222a-ce44-4153-816d-edc3ec8f8240-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.166265 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c205222a-ce44-4153-816d-edc3ec8f8240-kube-api-access-wdjtb" (OuterVolumeSpecName: "kube-api-access-wdjtb") pod "c205222a-ce44-4153-816d-edc3ec8f8240" (UID: "c205222a-ce44-4153-816d-edc3ec8f8240"). InnerVolumeSpecName "kube-api-access-wdjtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.277766 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdjtb\" (UniqueName: \"kubernetes.io/projected/c205222a-ce44-4153-816d-edc3ec8f8240-kube-api-access-wdjtb\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.384592 4729 generic.go:334] "Generic (PLEG): container finished" podID="a349bd01-6251-4026-b264-0d85efa07d09" containerID="7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610" exitCode=143 Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.384655 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a349bd01-6251-4026-b264-0d85efa07d09","Type":"ContainerDied","Data":"7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610"} Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.387732 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zvnjb" 
event={"ID":"c205222a-ce44-4153-816d-edc3ec8f8240","Type":"ContainerDied","Data":"36cc0278f6bb6292ac5efe771c9e9ea8778635053423a76a530d526aef8c67d1"} Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.387796 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36cc0278f6bb6292ac5efe771c9e9ea8778635053423a76a530d526aef8c67d1" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.387853 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zvnjb" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.882392 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.913396 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b863055-1278-4ddd-87fc-5eb337ec92b0-operator-scripts\") pod \"8b863055-1278-4ddd-87fc-5eb337ec92b0\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.913474 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn66r\" (UniqueName: \"kubernetes.io/projected/8b863055-1278-4ddd-87fc-5eb337ec92b0-kube-api-access-gn66r\") pod \"8b863055-1278-4ddd-87fc-5eb337ec92b0\" (UID: \"8b863055-1278-4ddd-87fc-5eb337ec92b0\") " Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.915548 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b863055-1278-4ddd-87fc-5eb337ec92b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b863055-1278-4ddd-87fc-5eb337ec92b0" (UID: "8b863055-1278-4ddd-87fc-5eb337ec92b0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:21 crc kubenswrapper[4729]: I0127 14:35:21.938512 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b863055-1278-4ddd-87fc-5eb337ec92b0-kube-api-access-gn66r" (OuterVolumeSpecName: "kube-api-access-gn66r") pod "8b863055-1278-4ddd-87fc-5eb337ec92b0" (UID: "8b863055-1278-4ddd-87fc-5eb337ec92b0"). InnerVolumeSpecName "kube-api-access-gn66r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.024373 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b863055-1278-4ddd-87fc-5eb337ec92b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.024413 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn66r\" (UniqueName: \"kubernetes.io/projected/8b863055-1278-4ddd-87fc-5eb337ec92b0-kube-api-access-gn66r\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.348290 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.429463 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-226hb" event={"ID":"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce","Type":"ContainerDied","Data":"d7735991bba2dfa787fad71351f6016a02d56ce010db6e7f354974553b6c62cd"} Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.429504 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7735991bba2dfa787fad71351f6016a02d56ce010db6e7f354974553b6c62cd" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.429563 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-226hb" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.440108 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-operator-scripts\") pod \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.440186 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbs7m\" (UniqueName: \"kubernetes.io/projected/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-kube-api-access-zbs7m\") pod \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\" (UID: \"e28f95b9-03fc-42f0-aa6b-bee0ebbdefce\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.441369 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-krsxm" event={"ID":"8b863055-1278-4ddd-87fc-5eb337ec92b0","Type":"ContainerDied","Data":"c3a9abd9d14da711b4d340f9990627fb99a6c68ed4ca865722760a86833c7e5f"} Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.441413 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a9abd9d14da711b4d340f9990627fb99a6c68ed4ca865722760a86833c7e5f" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.441490 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-krsxm" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.442958 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" (UID: "e28f95b9-03fc-42f0-aa6b-bee0ebbdefce"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.448247 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-kube-api-access-zbs7m" (OuterVolumeSpecName: "kube-api-access-zbs7m") pod "e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" (UID: "e28f95b9-03fc-42f0-aa6b-bee0ebbdefce"). InnerVolumeSpecName "kube-api-access-zbs7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.519498 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.542685 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.542716 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbs7m\" (UniqueName: \"kubernetes.io/projected/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce-kube-api-access-zbs7m\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.546173 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.553151 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.643893 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea0fb13c-3315-42a9-9fdc-14492e98546f-operator-scripts\") pod \"ea0fb13c-3315-42a9-9fdc-14492e98546f\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.644016 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b53e6829-f0c7-4c38-b258-42230df58947-operator-scripts\") pod \"b53e6829-f0c7-4c38-b258-42230df58947\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.644112 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9qs\" (UniqueName: \"kubernetes.io/projected/7126250f-09fc-45d2-ba39-636094d89da7-kube-api-access-zj9qs\") pod \"7126250f-09fc-45d2-ba39-636094d89da7\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.644197 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6bqk\" (UniqueName: \"kubernetes.io/projected/b53e6829-f0c7-4c38-b258-42230df58947-kube-api-access-t6bqk\") pod \"b53e6829-f0c7-4c38-b258-42230df58947\" (UID: \"b53e6829-f0c7-4c38-b258-42230df58947\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.644226 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9m4x4\" (UniqueName: \"kubernetes.io/projected/ea0fb13c-3315-42a9-9fdc-14492e98546f-kube-api-access-9m4x4\") pod \"ea0fb13c-3315-42a9-9fdc-14492e98546f\" (UID: \"ea0fb13c-3315-42a9-9fdc-14492e98546f\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.644321 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7126250f-09fc-45d2-ba39-636094d89da7-operator-scripts\") pod \"7126250f-09fc-45d2-ba39-636094d89da7\" (UID: \"7126250f-09fc-45d2-ba39-636094d89da7\") " Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.645428 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7126250f-09fc-45d2-ba39-636094d89da7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7126250f-09fc-45d2-ba39-636094d89da7" (UID: "7126250f-09fc-45d2-ba39-636094d89da7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.645944 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea0fb13c-3315-42a9-9fdc-14492e98546f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea0fb13c-3315-42a9-9fdc-14492e98546f" (UID: "ea0fb13c-3315-42a9-9fdc-14492e98546f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.650471 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b53e6829-f0c7-4c38-b258-42230df58947-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b53e6829-f0c7-4c38-b258-42230df58947" (UID: "b53e6829-f0c7-4c38-b258-42230df58947"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.650587 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b53e6829-f0c7-4c38-b258-42230df58947-kube-api-access-t6bqk" (OuterVolumeSpecName: "kube-api-access-t6bqk") pod "b53e6829-f0c7-4c38-b258-42230df58947" (UID: "b53e6829-f0c7-4c38-b258-42230df58947"). 
InnerVolumeSpecName "kube-api-access-t6bqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.651629 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea0fb13c-3315-42a9-9fdc-14492e98546f-kube-api-access-9m4x4" (OuterVolumeSpecName: "kube-api-access-9m4x4") pod "ea0fb13c-3315-42a9-9fdc-14492e98546f" (UID: "ea0fb13c-3315-42a9-9fdc-14492e98546f"). InnerVolumeSpecName "kube-api-access-9m4x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.653123 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7126250f-09fc-45d2-ba39-636094d89da7-kube-api-access-zj9qs" (OuterVolumeSpecName: "kube-api-access-zj9qs") pod "7126250f-09fc-45d2-ba39-636094d89da7" (UID: "7126250f-09fc-45d2-ba39-636094d89da7"). InnerVolumeSpecName "kube-api-access-zj9qs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.747951 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea0fb13c-3315-42a9-9fdc-14492e98546f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.748005 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b53e6829-f0c7-4c38-b258-42230df58947-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.748018 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9qs\" (UniqueName: \"kubernetes.io/projected/7126250f-09fc-45d2-ba39-636094d89da7-kube-api-access-zj9qs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.748031 4729 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-t6bqk\" (UniqueName: \"kubernetes.io/projected/b53e6829-f0c7-4c38-b258-42230df58947-kube-api-access-t6bqk\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.748042 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9m4x4\" (UniqueName: \"kubernetes.io/projected/ea0fb13c-3315-42a9-9fdc-14492e98546f-kube-api-access-9m4x4\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:22 crc kubenswrapper[4729]: I0127 14:35:22.748053 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7126250f-09fc-45d2-ba39-636094d89da7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.455965 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-100f-account-create-update-l78l7" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.455925 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-100f-account-create-update-l78l7" event={"ID":"ea0fb13c-3315-42a9-9fdc-14492e98546f","Type":"ContainerDied","Data":"08f64ca7f9fc72ca768b86671476eafa0e54e687fb07d6b8563d684493863102"} Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.456314 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f64ca7f9fc72ca768b86671476eafa0e54e687fb07d6b8563d684493863102" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.459809 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4af9-account-create-update-c7f5v" event={"ID":"b53e6829-f0c7-4c38-b258-42230df58947","Type":"ContainerDied","Data":"5f2d38ee0b9ec6b7b0e1e1ca731f3c5230f07b185d7bc6c5cb2d1a1e895ebb0c"} Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.459853 4729 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5f2d38ee0b9ec6b7b0e1e1ca731f3c5230f07b185d7bc6c5cb2d1a1e895ebb0c" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.459855 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4af9-account-create-update-c7f5v" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.462982 4729 generic.go:334] "Generic (PLEG): container finished" podID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerID="9e32dd7451d221d043d9724a8f9a6b7f7d9a0369c27ad7cc736ec0d9250da2f6" exitCode=0 Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.463022 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52","Type":"ContainerDied","Data":"9e32dd7451d221d043d9724a8f9a6b7f7d9a0369c27ad7cc736ec0d9250da2f6"} Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.463078 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52","Type":"ContainerDied","Data":"c3febaae700fa94bb6f0a0fcff925ae8b7b1abd57a355fec3ace41685f3cab1d"} Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.463101 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3febaae700fa94bb6f0a0fcff925ae8b7b1abd57a355fec3ace41685f3cab1d" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.465420 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" event={"ID":"7126250f-09fc-45d2-ba39-636094d89da7","Type":"ContainerDied","Data":"9a782af4a1f5142b52acb050cde756859ed2f624739ada7a4d5d42a560e52638"} Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.465448 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a782af4a1f5142b52acb050cde756859ed2f624739ada7a4d5d42a560e52638" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.465502 4729 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-996c-account-create-update-vkz4r" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.520661 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.670290 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-combined-ca-bundle\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.670711 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-public-tls-certs\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.670742 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-scripts\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.670800 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwfwr\" (UniqueName: \"kubernetes.io/projected/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-kube-api-access-vwfwr\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.670943 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-logs\") pod 
\"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.671132 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-httpd-run\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.672019 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.672149 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-config-data\") pod \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\" (UID: \"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52\") " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.673382 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-logs" (OuterVolumeSpecName: "logs") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.673705 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.680395 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-kube-api-access-vwfwr" (OuterVolumeSpecName: "kube-api-access-vwfwr") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "kube-api-access-vwfwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.701200 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-scripts" (OuterVolumeSpecName: "scripts") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.719679 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb" (OuterVolumeSpecName: "glance") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "pvc-57455421-4417-4d20-8801-ff0bdce950eb". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.738001 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.775384 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.775426 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.775437 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwfwr\" (UniqueName: \"kubernetes.io/projected/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-kube-api-access-vwfwr\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.775446 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.775453 4729 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.775484 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") on node \"crc\" " Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.784999 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: 
"0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.819335 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-config-data" (OuterVolumeSpecName: "config-data") pod "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" (UID: "0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.871756 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.872084 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-57455421-4417-4d20-8801-ff0bdce950eb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb") on node "crc" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.877794 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.877843 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:23 crc kubenswrapper[4729]: I0127 14:35:23.877855 4729 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.432276 4729 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.490701 4729 generic.go:334] "Generic (PLEG): container finished" podID="a349bd01-6251-4026-b264-0d85efa07d09" containerID="2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230" exitCode=0 Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.490797 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.491434 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.491603 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a349bd01-6251-4026-b264-0d85efa07d09","Type":"ContainerDied","Data":"2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230"} Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.491641 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a349bd01-6251-4026-b264-0d85efa07d09","Type":"ContainerDied","Data":"4ff08452eff672dbd795081df6b86e7adf636a5c02fa007484c01388b9253303"} Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.491657 4729 scope.go:117] "RemoveContainer" containerID="2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492146 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492188 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-29hcd\" (UniqueName: \"kubernetes.io/projected/a349bd01-6251-4026-b264-0d85efa07d09-kube-api-access-29hcd\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492266 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-internal-tls-certs\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492340 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-scripts\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492425 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-combined-ca-bundle\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492589 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-httpd-run\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492698 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-logs\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: 
\"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.492803 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-config-data\") pod \"a349bd01-6251-4026-b264-0d85efa07d09\" (UID: \"a349bd01-6251-4026-b264-0d85efa07d09\") " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.494210 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-logs" (OuterVolumeSpecName: "logs") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.496022 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.502775 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-scripts" (OuterVolumeSpecName: "scripts") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.570223 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.572415 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a349bd01-6251-4026-b264-0d85efa07d09-kube-api-access-29hcd" (OuterVolumeSpecName: "kube-api-access-29hcd") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "kube-api-access-29hcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.583682 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.600128 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.600186 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29hcd\" (UniqueName: \"kubernetes.io/projected/a349bd01-6251-4026-b264-0d85efa07d09-kube-api-access-29hcd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.600202 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.600215 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.600226 4729 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a349bd01-6251-4026-b264-0d85efa07d09-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.620538 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12" (OuterVolumeSpecName: "glance") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.627746 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.650341 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-config-data" (OuterVolumeSpecName: "config-data") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.655922 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656437 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-log" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656452 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-log" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656473 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b53e6829-f0c7-4c38-b258-42230df58947" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656481 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b53e6829-f0c7-4c38-b258-42230df58947" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656495 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656501 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656519 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7126250f-09fc-45d2-ba39-636094d89da7" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656525 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7126250f-09fc-45d2-ba39-636094d89da7" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656537 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-httpd" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656542 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-httpd" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656556 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea0fb13c-3315-42a9-9fdc-14492e98546f" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656584 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea0fb13c-3315-42a9-9fdc-14492e98546f" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656597 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-httpd" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656604 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-httpd" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656617 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-log" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656623 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-log" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656634 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b863055-1278-4ddd-87fc-5eb337ec92b0" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656640 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b863055-1278-4ddd-87fc-5eb337ec92b0" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.656655 4729 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c205222a-ce44-4153-816d-edc3ec8f8240" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656661 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c205222a-ce44-4153-816d-edc3ec8f8240" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656851 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-httpd" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.656863 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea0fb13c-3315-42a9-9fdc-14492e98546f" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657018 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657033 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b53e6829-f0c7-4c38-b258-42230df58947" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657042 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-log" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657052 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" containerName="glance-log" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657060 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a349bd01-6251-4026-b264-0d85efa07d09" containerName="glance-httpd" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657073 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b863055-1278-4ddd-87fc-5eb337ec92b0" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 
14:35:24.657083 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="7126250f-09fc-45d2-ba39-636094d89da7" containerName="mariadb-account-create-update" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.657094 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c205222a-ce44-4153-816d-edc3ec8f8240" containerName="mariadb-database-create" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.658290 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.664909 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.665394 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.668820 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.697158 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a349bd01-6251-4026-b264-0d85efa07d09" (UID: "a349bd01-6251-4026-b264-0d85efa07d09"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.701718 4729 scope.go:117] "RemoveContainer" containerID="7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.705370 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-scripts\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.705435 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2132f727-3016-42f6-ba30-864e70540513-logs\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.705486 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.705581 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.705609 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pwckt\" (UniqueName: \"kubernetes.io/projected/2132f727-3016-42f6-ba30-864e70540513-kube-api-access-pwckt\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.705737 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.706160 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2132f727-3016-42f6-ba30-864e70540513-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.706218 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-config-data\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.706378 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.706418 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") on node \"crc\" " Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.706434 4729 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a349bd01-6251-4026-b264-0d85efa07d09-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.739961 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.740128 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12") on node "crc" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.742076 4729 scope.go:117] "RemoveContainer" containerID="2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.742916 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230\": container with ID starting with 2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230 not found: ID does not exist" containerID="2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.742955 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230"} err="failed to get container status \"2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230\": rpc error: code = NotFound desc = could not find container \"2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230\": 
container with ID starting with 2cce4e7b7e5df926e19b76ee64e49cd82c91da0a5e3a07d94daf0ee06bf9f230 not found: ID does not exist" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.742985 4729 scope.go:117] "RemoveContainer" containerID="7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610" Jan 27 14:35:24 crc kubenswrapper[4729]: E0127 14:35:24.743333 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610\": container with ID starting with 7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610 not found: ID does not exist" containerID="7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.743366 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610"} err="failed to get container status \"7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610\": rpc error: code = NotFound desc = could not find container \"7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610\": container with ID starting with 7641f9c35cc6f30011881cf4c18ed3287d5e7546692c03662942d109307d7610 not found: ID does not exist" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808024 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2132f727-3016-42f6-ba30-864e70540513-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808070 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-scripts\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808127 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2132f727-3016-42f6-ba30-864e70540513-logs\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808166 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808284 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808325 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwckt\" (UniqueName: \"kubernetes.io/projected/2132f727-3016-42f6-ba30-864e70540513-kube-api-access-pwckt\") pod \"glance-default-external-api-0\" (UID: 
\"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808364 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.808490 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.809363 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2132f727-3016-42f6-ba30-864e70540513-logs\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.809793 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2132f727-3016-42f6-ba30-864e70540513-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.813824 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.813871 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/680b0567a76c6d555c8b8d8fdb3c5a567c93fb361c00ca1e51ff7c6b12fbca95/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.814349 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-config-data\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.821348 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.829076 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.832730 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2132f727-3016-42f6-ba30-864e70540513-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.840113 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwckt\" (UniqueName: \"kubernetes.io/projected/2132f727-3016-42f6-ba30-864e70540513-kube-api-access-pwckt\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.887151 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.900055 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.919503 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.925209 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.929421 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.929691 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.936308 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:35:24 crc kubenswrapper[4729]: I0127 14:35:24.945381 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-57455421-4417-4d20-8801-ff0bdce950eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-57455421-4417-4d20-8801-ff0bdce950eb\") pod \"glance-default-external-api-0\" (UID: \"2132f727-3016-42f6-ba30-864e70540513\") " pod="openstack/glance-default-external-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.005286 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.035788 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e371e969-2ec7-42fe-95bd-5765dc511224-logs\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.035859 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.035987 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e371e969-2ec7-42fe-95bd-5765dc511224-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.036019 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.036074 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.036187 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.036280 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.036428 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrnnt\" (UniqueName: \"kubernetes.io/projected/e371e969-2ec7-42fe-95bd-5765dc511224-kube-api-access-vrnnt\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.140185 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.140266 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.140594 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.140722 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.141238 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrnnt\" (UniqueName: \"kubernetes.io/projected/e371e969-2ec7-42fe-95bd-5765dc511224-kube-api-access-vrnnt\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.141284 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e371e969-2ec7-42fe-95bd-5765dc511224-logs\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.141312 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.141654 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e371e969-2ec7-42fe-95bd-5765dc511224-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.142408 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e371e969-2ec7-42fe-95bd-5765dc511224-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.145572 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e371e969-2ec7-42fe-95bd-5765dc511224-logs\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.153130 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.155496 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.161489 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.163985 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e371e969-2ec7-42fe-95bd-5765dc511224-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.174304 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.174365 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/16daece8afd0a3836c1be067eae3de4cdad06a409abefe2c3a15f5053658ec82/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.177223 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrnnt\" (UniqueName: \"kubernetes.io/projected/e371e969-2ec7-42fe-95bd-5765dc511224-kube-api-access-vrnnt\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.339700 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.343055 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4aedcfa3-7f80-4ff9-8508-5d1b35c58d12\") pod \"glance-default-internal-api-0\" (UID: \"e371e969-2ec7-42fe-95bd-5765dc511224\") " pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.450847 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data-custom\") pod \"4ac544ed-dbcb-46f4-9324-7000feda0230\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.451658 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7q9f\" (UniqueName: \"kubernetes.io/projected/4ac544ed-dbcb-46f4-9324-7000feda0230-kube-api-access-t7q9f\") pod \"4ac544ed-dbcb-46f4-9324-7000feda0230\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.451988 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data\") pod \"4ac544ed-dbcb-46f4-9324-7000feda0230\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.452028 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-combined-ca-bundle\") pod \"4ac544ed-dbcb-46f4-9324-7000feda0230\" (UID: \"4ac544ed-dbcb-46f4-9324-7000feda0230\") " Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.460836 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ac544ed-dbcb-46f4-9324-7000feda0230" (UID: "4ac544ed-dbcb-46f4-9324-7000feda0230"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.470750 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac544ed-dbcb-46f4-9324-7000feda0230-kube-api-access-t7q9f" (OuterVolumeSpecName: "kube-api-access-t7q9f") pod "4ac544ed-dbcb-46f4-9324-7000feda0230" (UID: "4ac544ed-dbcb-46f4-9324-7000feda0230"). InnerVolumeSpecName "kube-api-access-t7q9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.494513 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ac544ed-dbcb-46f4-9324-7000feda0230" (UID: "4ac544ed-dbcb-46f4-9324-7000feda0230"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.507833 4729 generic.go:334] "Generic (PLEG): container finished" podID="4ac544ed-dbcb-46f4-9324-7000feda0230" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" exitCode=0 Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.507955 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6fb876b-qnrdg" event={"ID":"4ac544ed-dbcb-46f4-9324-7000feda0230","Type":"ContainerDied","Data":"17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8"} Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.507987 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6cf6fb876b-qnrdg" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.508171 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6cf6fb876b-qnrdg" event={"ID":"4ac544ed-dbcb-46f4-9324-7000feda0230","Type":"ContainerDied","Data":"d5dbb1a9819c5483ae035861b7ae4a9775b8daa8f5b6c9fd4e9a5fbad4315ae1"} Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.508275 4729 scope.go:117] "RemoveContainer" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.553748 4729 scope.go:117] "RemoveContainer" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" Jan 27 14:35:25 crc kubenswrapper[4729]: E0127 14:35:25.555289 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8\": container with ID starting with 17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8 not found: ID does not exist" containerID="17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.555323 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8"} err="failed to get container status \"17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8\": rpc error: code = NotFound desc = could not find container \"17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8\": container with ID starting with 17b11bd050e7f72ee7e3e96a6f2ac5c7208190f54a57751ba534da62747b1ed8 not found: ID does not exist" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.556557 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.556606 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7q9f\" (UniqueName: \"kubernetes.io/projected/4ac544ed-dbcb-46f4-9324-7000feda0230-kube-api-access-t7q9f\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.556622 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.563541 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.566126 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data" (OuterVolumeSpecName: "config-data") pod "4ac544ed-dbcb-46f4-9324-7000feda0230" (UID: "4ac544ed-dbcb-46f4-9324-7000feda0230"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.662062 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac544ed-dbcb-46f4-9324-7000feda0230-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.838411 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.894932 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6cf6fb876b-qnrdg"] Jan 27 14:35:25 crc kubenswrapper[4729]: I0127 14:35:25.915886 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6cf6fb876b-qnrdg"] Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.074268 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52" path="/var/lib/kubelet/pods/0885ef7c-7d5a-4f4a-80b1-5d0c1d6b7e52/volumes" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.079117 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac544ed-dbcb-46f4-9324-7000feda0230" path="/var/lib/kubelet/pods/4ac544ed-dbcb-46f4-9324-7000feda0230/volumes" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.083145 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a349bd01-6251-4026-b264-0d85efa07d09" path="/var/lib/kubelet/pods/a349bd01-6251-4026-b264-0d85efa07d09/volumes" Jan 27 14:35:26 crc kubenswrapper[4729]: E0127 14:35:26.128264 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ac544ed_dbcb_46f4_9324_7000feda0230.slice/crio-d5dbb1a9819c5483ae035861b7ae4a9775b8daa8f5b6c9fd4e9a5fbad4315ae1\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ac544ed_dbcb_46f4_9324_7000feda0230.slice\": RecentStats: unable to find data in memory cache]" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.154847 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.526960 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2132f727-3016-42f6-ba30-864e70540513","Type":"ContainerStarted","Data":"132a578761ded50cb1670aef570d3754c6acc6b03cfd0b9f0d1bf314de014a07"} Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.529316 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e371e969-2ec7-42fe-95bd-5765dc511224","Type":"ContainerStarted","Data":"4281bd9a9cbd830d20b3f78661b6c2e62d8d67f9d8ae70e0f437187dc36a501e"} Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.900707 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7xp9"] Jan 27 14:35:26 crc kubenswrapper[4729]: E0127 14:35:26.901327 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac544ed-dbcb-46f4-9324-7000feda0230" containerName="heat-engine" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.901352 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac544ed-dbcb-46f4-9324-7000feda0230" containerName="heat-engine" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.901657 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac544ed-dbcb-46f4-9324-7000feda0230" containerName="heat-engine" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.902522 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.922442 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.922596 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.922632 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-t9xxw" Jan 27 14:35:26 crc kubenswrapper[4729]: I0127 14:35:26.939894 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7xp9"] Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.001222 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-scripts\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.001454 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqh7r\" (UniqueName: \"kubernetes.io/projected/2f535d0a-0620-44b3-84ba-c2119fa90330-kube-api-access-dqh7r\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.001497 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-config-data\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " 
pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.001515 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.102257 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqh7r\" (UniqueName: \"kubernetes.io/projected/2f535d0a-0620-44b3-84ba-c2119fa90330-kube-api-access-dqh7r\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.102327 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-config-data\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.102348 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.102441 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-scripts\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: 
\"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.115463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-config-data\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.117522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.165848 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-scripts\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.174086 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqh7r\" (UniqueName: \"kubernetes.io/projected/2f535d0a-0620-44b3-84ba-c2119fa90330-kube-api-access-dqh7r\") pod \"nova-cell0-conductor-db-sync-x7xp9\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.256689 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:27 crc kubenswrapper[4729]: I0127 14:35:27.922396 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7xp9"] Jan 27 14:35:28 crc kubenswrapper[4729]: I0127 14:35:28.568851 4729 generic.go:334] "Generic (PLEG): container finished" podID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerID="7f5fe8d125da0eb36da1b5c7ea0ecd3498d08ce7797295ef7c2ca3faccecffd7" exitCode=0 Jan 27 14:35:28 crc kubenswrapper[4729]: I0127 14:35:28.568919 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerDied","Data":"7f5fe8d125da0eb36da1b5c7ea0ecd3498d08ce7797295ef7c2ca3faccecffd7"} Jan 27 14:35:28 crc kubenswrapper[4729]: I0127 14:35:28.572602 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2132f727-3016-42f6-ba30-864e70540513","Type":"ContainerStarted","Data":"aca5c241be4ee670f1ca30d2820c51674d88d2816bf83de5028f8139230ec90c"} Jan 27 14:35:28 crc kubenswrapper[4729]: I0127 14:35:28.575707 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e371e969-2ec7-42fe-95bd-5765dc511224","Type":"ContainerStarted","Data":"0ab0bab891c4b490af475b10b5d3598189b6a4bac65635f90adebeda643f528d"} Jan 27 14:35:28 crc kubenswrapper[4729]: I0127 14:35:28.578459 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" event={"ID":"2f535d0a-0620-44b3-84ba-c2119fa90330","Type":"ContainerStarted","Data":"b5278397dba4128e0bf00ac66b3be49dcb066c3ac2849391eb9719d86e61e7b0"} Jan 27 14:35:28 crc kubenswrapper[4729]: I0127 14:35:28.989364 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086318 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5j96\" (UniqueName: \"kubernetes.io/projected/14d3de75-2a85-4dc5-8d62-9febba3f31a7-kube-api-access-f5j96\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086414 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-sg-core-conf-yaml\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086586 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-scripts\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086645 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-log-httpd\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086680 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-run-httpd\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086711 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-combined-ca-bundle\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.086760 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-config-data\") pod \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\" (UID: \"14d3de75-2a85-4dc5-8d62-9febba3f31a7\") " Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.091383 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.091762 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.099702 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-scripts" (OuterVolumeSpecName: "scripts") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.107262 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14d3de75-2a85-4dc5-8d62-9febba3f31a7-kube-api-access-f5j96" (OuterVolumeSpecName: "kube-api-access-f5j96") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "kube-api-access-f5j96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.183378 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.190071 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.190276 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5j96\" (UniqueName: \"kubernetes.io/projected/14d3de75-2a85-4dc5-8d62-9febba3f31a7-kube-api-access-f5j96\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.190363 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.190425 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-scripts\") on node 
\"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.190475 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14d3de75-2a85-4dc5-8d62-9febba3f31a7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.290967 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.293079 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.355110 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-config-data" (OuterVolumeSpecName: "config-data") pod "14d3de75-2a85-4dc5-8d62-9febba3f31a7" (UID: "14d3de75-2a85-4dc5-8d62-9febba3f31a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.405970 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14d3de75-2a85-4dc5-8d62-9febba3f31a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.600132 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14d3de75-2a85-4dc5-8d62-9febba3f31a7","Type":"ContainerDied","Data":"d5dfab7a5b44a0216db16f9ba61d972a93bb9420dcb73fb7871663b4fe70e28b"} Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.600218 4729 scope.go:117] "RemoveContainer" containerID="6ca5b5aff42dac44bee276ec172356847739391d952fbf43e8cd22ece5591f44" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.600415 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.602951 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2132f727-3016-42f6-ba30-864e70540513","Type":"ContainerStarted","Data":"e9231780ee27d1c6ae218f6b906fe34417318233798008e28e1fce4f26de2147"} Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.607209 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e371e969-2ec7-42fe-95bd-5765dc511224","Type":"ContainerStarted","Data":"a2fea396e3e88240f8b8d44273dcc31dceb88e2b63326df431d28bf4f0ff081e"} Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.634330 4729 scope.go:117] "RemoveContainer" containerID="9330d68680f55ea82cd15e35dbf714d296fd99d013521785f91324046608d215" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.654064 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.654038466 
podStartE2EDuration="5.654038466s" podCreationTimestamp="2026-01-27 14:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:29.634849016 +0000 UTC m=+1816.219040020" watchObservedRunningTime="2026-01-27 14:35:29.654038466 +0000 UTC m=+1816.238229490" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.679459 4729 scope.go:117] "RemoveContainer" containerID="6ecd3aa377ff9afe8e2293dccf408bf1792017ea358a6a751dd5e4451bc1e069" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.691102 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.703921 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.718689 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.718671533 podStartE2EDuration="5.718671533s" podCreationTimestamp="2026-01-27 14:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:35:29.670154912 +0000 UTC m=+1816.254345946" watchObservedRunningTime="2026-01-27 14:35:29.718671533 +0000 UTC m=+1816.302862537" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.721252 4729 scope.go:117] "RemoveContainer" containerID="7f5fe8d125da0eb36da1b5c7ea0ecd3498d08ce7797295ef7c2ca3faccecffd7" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.765296 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:29 crc kubenswrapper[4729]: E0127 14:35:29.765904 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="proxy-httpd" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.765925 4729 
state_mem.go:107] "Deleted CPUSet assignment" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="proxy-httpd" Jan 27 14:35:29 crc kubenswrapper[4729]: E0127 14:35:29.765940 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-central-agent" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.765977 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-central-agent" Jan 27 14:35:29 crc kubenswrapper[4729]: E0127 14:35:29.766008 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="sg-core" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.766017 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="sg-core" Jan 27 14:35:29 crc kubenswrapper[4729]: E0127 14:35:29.766063 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-notification-agent" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.766073 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-notification-agent" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.766399 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-notification-agent" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.766421 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="sg-core" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.766443 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="ceilometer-central-agent" Jan 27 14:35:29 crc 
kubenswrapper[4729]: I0127 14:35:29.766464 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" containerName="proxy-httpd" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.776472 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.782303 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.782896 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.801678 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.918361 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.918484 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-log-httpd\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.918540 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-config-data\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 
14:35:29.918592 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-scripts\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.918606 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.918629 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l444\" (UniqueName: \"kubernetes.io/projected/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-kube-api-access-9l444\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:29 crc kubenswrapper[4729]: I0127 14:35:29.918684 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-run-httpd\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.022677 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-config-data\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.022865 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.022923 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-scripts\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.022953 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l444\" (UniqueName: \"kubernetes.io/projected/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-kube-api-access-9l444\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.023026 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-run-httpd\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.023318 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.023491 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-log-httpd\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 
14:35:30.024609 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-run-httpd\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.028568 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-log-httpd\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.030522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-scripts\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.031813 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.033474 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.042094 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-config-data\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " 
pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.046688 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l444\" (UniqueName: \"kubernetes.io/projected/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-kube-api-access-9l444\") pod \"ceilometer-0\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.079304 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14d3de75-2a85-4dc5-8d62-9febba3f31a7" path="/var/lib/kubelet/pods/14d3de75-2a85-4dc5-8d62-9febba3f31a7/volumes" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.105798 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:30 crc kubenswrapper[4729]: I0127 14:35:30.618849 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:30 crc kubenswrapper[4729]: W0127 14:35:30.625901 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd72be1b4_e7f1_47e9_9916_a0c37435b3c9.slice/crio-ab7183e1ec4d2aaf8adbf353f7323bfb7338826a5e5a8ebbbbf38c62008febf7 WatchSource:0}: Error finding container ab7183e1ec4d2aaf8adbf353f7323bfb7338826a5e5a8ebbbbf38c62008febf7: Status 404 returned error can't find the container with id ab7183e1ec4d2aaf8adbf353f7323bfb7338826a5e5a8ebbbbf38c62008febf7 Jan 27 14:35:31 crc kubenswrapper[4729]: I0127 14:35:31.648355 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerStarted","Data":"ab7183e1ec4d2aaf8adbf353f7323bfb7338826a5e5a8ebbbbf38c62008febf7"} Jan 27 14:35:32 crc kubenswrapper[4729]: I0127 14:35:32.662148 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerStarted","Data":"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3"} Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.006362 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.006764 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.052543 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.063237 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.076101 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.565596 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.565662 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.619871 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.625521 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.701939 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:35:35 
crc kubenswrapper[4729]: I0127 14:35:35.702191 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.702275 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:35 crc kubenswrapper[4729]: I0127 14:35:35.702352 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 27 14:35:38 crc kubenswrapper[4729]: I0127 14:35:38.735936 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"2b06dc6218d9c2c78f700138490d117911b33585502602acff7be6e8841ff698"} Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.305320 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.305975 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.316317 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.316437 4729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.328917 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.399694 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.762719 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerStarted","Data":"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c"} Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.765264 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" event={"ID":"2f535d0a-0620-44b3-84ba-c2119fa90330","Type":"ContainerStarted","Data":"88e37e5a6b9ea8a0931cf632b3a09663b478e2320c3a80caded8b2847f21e5c9"} Jan 27 14:35:40 crc kubenswrapper[4729]: I0127 14:35:40.782173 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" podStartSLOduration=2.551779161 podStartE2EDuration="14.782154861s" podCreationTimestamp="2026-01-27 14:35:26 +0000 UTC" firstStartedPulling="2026-01-27 14:35:27.934202405 +0000 UTC m=+1814.518393409" lastFinishedPulling="2026-01-27 14:35:40.164578115 +0000 UTC m=+1826.748769109" observedRunningTime="2026-01-27 14:35:40.780184866 +0000 UTC m=+1827.364375890" watchObservedRunningTime="2026-01-27 14:35:40.782154861 +0000 UTC m=+1827.366345885" Jan 27 14:35:41 crc kubenswrapper[4729]: I0127 14:35:41.787014 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerStarted","Data":"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247"} Jan 27 14:35:44 crc kubenswrapper[4729]: I0127 14:35:44.823024 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerStarted","Data":"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b"} Jan 27 14:35:44 crc kubenswrapper[4729]: I0127 14:35:44.823804 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:35:46 crc kubenswrapper[4729]: I0127 14:35:46.089095 4729 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.948082615 podStartE2EDuration="17.089068523s" podCreationTimestamp="2026-01-27 14:35:29 +0000 UTC" firstStartedPulling="2026-01-27 14:35:30.6303172 +0000 UTC m=+1817.214508214" lastFinishedPulling="2026-01-27 14:35:43.771303118 +0000 UTC m=+1830.355494122" observedRunningTime="2026-01-27 14:35:44.865397999 +0000 UTC m=+1831.449589013" watchObservedRunningTime="2026-01-27 14:35:46.089068523 +0000 UTC m=+1832.673259527" Jan 27 14:35:46 crc kubenswrapper[4729]: I0127 14:35:46.099639 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:46 crc kubenswrapper[4729]: I0127 14:35:46.848455 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-central-agent" containerID="cri-o://d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" gracePeriod=30 Jan 27 14:35:46 crc kubenswrapper[4729]: I0127 14:35:46.848910 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="sg-core" containerID="cri-o://e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" gracePeriod=30 Jan 27 14:35:46 crc kubenswrapper[4729]: I0127 14:35:46.848938 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-notification-agent" containerID="cri-o://61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" gracePeriod=30 Jan 27 14:35:46 crc kubenswrapper[4729]: I0127 14:35:46.849031 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="proxy-httpd" 
containerID="cri-o://4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" gracePeriod=30 Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.761611 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.801843 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-config-data\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.801947 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-run-httpd\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.802063 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-combined-ca-bundle\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.802922 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l444\" (UniqueName: \"kubernetes.io/projected/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-kube-api-access-9l444\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.803228 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-sg-core-conf-yaml\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" 
(UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.803278 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-log-httpd\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.803308 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-scripts\") pod \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\" (UID: \"d72be1b4-e7f1-47e9-9916-a0c37435b3c9\") " Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.804234 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.804462 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.805985 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.810723 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-kube-api-access-9l444" (OuterVolumeSpecName: "kube-api-access-9l444") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "kube-api-access-9l444". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.810872 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-scripts" (OuterVolumeSpecName: "scripts") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.845317 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861484 4729 generic.go:334] "Generic (PLEG): container finished" podID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" exitCode=0 Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861526 4729 generic.go:334] "Generic (PLEG): container finished" podID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" exitCode=2 Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861536 4729 generic.go:334] "Generic (PLEG): container finished" podID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" exitCode=0 Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861547 4729 generic.go:334] "Generic (PLEG): container finished" podID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" exitCode=0 Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861568 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861572 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerDied","Data":"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b"} Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861698 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerDied","Data":"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247"} Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861716 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerDied","Data":"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c"} Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861728 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerDied","Data":"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3"} Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861739 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d72be1b4-e7f1-47e9-9916-a0c37435b3c9","Type":"ContainerDied","Data":"ab7183e1ec4d2aaf8adbf353f7323bfb7338826a5e5a8ebbbbf38c62008febf7"} Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.861759 4729 scope.go:117] "RemoveContainer" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.889479 4729 scope.go:117] "RemoveContainer" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.906319 4729 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-9l444\" (UniqueName: \"kubernetes.io/projected/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-kube-api-access-9l444\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.906362 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.906377 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.906390 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.909421 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.929232 4729 scope.go:117] "RemoveContainer" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.944153 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-config-data" (OuterVolumeSpecName: "config-data") pod "d72be1b4-e7f1-47e9-9916-a0c37435b3c9" (UID: "d72be1b4-e7f1-47e9-9916-a0c37435b3c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:47 crc kubenswrapper[4729]: I0127 14:35:47.969926 4729 scope.go:117] "RemoveContainer" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.008338 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.008382 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d72be1b4-e7f1-47e9-9916-a0c37435b3c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.065045 4729 scope.go:117] "RemoveContainer" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.065554 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": container with ID starting with 4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b not found: ID does not exist" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.065586 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b"} err="failed to get container status \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": rpc error: code = NotFound desc = could not find container \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": container with ID starting with 4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b not found: ID does not 
exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.065605 4729 scope.go:117] "RemoveContainer" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.065938 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": container with ID starting with e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247 not found: ID does not exist" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.065975 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247"} err="failed to get container status \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": rpc error: code = NotFound desc = could not find container \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": container with ID starting with e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.065988 4729 scope.go:117] "RemoveContainer" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.066295 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": container with ID starting with 61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c not found: ID does not exist" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.066332 4729 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c"} err="failed to get container status \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": rpc error: code = NotFound desc = could not find container \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": container with ID starting with 61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.066346 4729 scope.go:117] "RemoveContainer" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.067310 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": container with ID starting with d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3 not found: ID does not exist" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.067339 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3"} err="failed to get container status \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": rpc error: code = NotFound desc = could not find container \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": container with ID starting with d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.067358 4729 scope.go:117] "RemoveContainer" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.067728 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b"} err="failed to get container status \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": rpc error: code = NotFound desc = could not find container \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": container with ID starting with 4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.067770 4729 scope.go:117] "RemoveContainer" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.068099 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247"} err="failed to get container status \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": rpc error: code = NotFound desc = could not find container \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": container with ID starting with e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.068121 4729 scope.go:117] "RemoveContainer" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.068418 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c"} err="failed to get container status \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": rpc error: code = NotFound desc = could not find container \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": container with ID starting with 
61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.068439 4729 scope.go:117] "RemoveContainer" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.068702 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3"} err="failed to get container status \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": rpc error: code = NotFound desc = could not find container \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": container with ID starting with d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.068722 4729 scope.go:117] "RemoveContainer" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.069028 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b"} err="failed to get container status \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": rpc error: code = NotFound desc = could not find container \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": container with ID starting with 4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.069080 4729 scope.go:117] "RemoveContainer" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.069382 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247"} err="failed to get container status \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": rpc error: code = NotFound desc = could not find container \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": container with ID starting with e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.069403 4729 scope.go:117] "RemoveContainer" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.069794 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c"} err="failed to get container status \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": rpc error: code = NotFound desc = could not find container \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": container with ID starting with 61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.069814 4729 scope.go:117] "RemoveContainer" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070070 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3"} err="failed to get container status \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": rpc error: code = NotFound desc = could not find container \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": container with ID starting with d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3 not found: ID does not 
exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070090 4729 scope.go:117] "RemoveContainer" containerID="4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070305 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b"} err="failed to get container status \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": rpc error: code = NotFound desc = could not find container \"4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b\": container with ID starting with 4080890cbcf12753d479366cadf3b1c74c27df9e92f78e7cb2cd3f865b12fc7b not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070327 4729 scope.go:117] "RemoveContainer" containerID="e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070528 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247"} err="failed to get container status \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": rpc error: code = NotFound desc = could not find container \"e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247\": container with ID starting with e70307b45f63d2b665f652698e8f4ec820d8dfdca37ac74b388a9bb110334247 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070551 4729 scope.go:117] "RemoveContainer" containerID="61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070754 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c"} err="failed to get container status 
\"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": rpc error: code = NotFound desc = could not find container \"61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c\": container with ID starting with 61f16fe03fa0fadbd1e3a693a55d4af1f6c0f657b1587cf0eef5894a4b70407c not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.070775 4729 scope.go:117] "RemoveContainer" containerID="d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.071099 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3"} err="failed to get container status \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": rpc error: code = NotFound desc = could not find container \"d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3\": container with ID starting with d396f4dcc164efe11f5e04af34833eb87d9c981c9655b2b6a8f71645b0e42aa3 not found: ID does not exist" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.201335 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.209244 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.297397 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.298859 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-notification-agent" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.298982 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-notification-agent" Jan 27 14:35:48 crc 
kubenswrapper[4729]: E0127 14:35:48.299144 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="proxy-httpd" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.299598 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="proxy-httpd" Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.299712 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="sg-core" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.299792 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="sg-core" Jan 27 14:35:48 crc kubenswrapper[4729]: E0127 14:35:48.299908 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-central-agent" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.299997 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-central-agent" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.300907 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="proxy-httpd" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.301006 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="sg-core" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.301119 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-notification-agent" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.301214 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" containerName="ceilometer-central-agent" Jan 27 
14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.305781 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.309727 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.310236 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.320549 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.428638 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.428760 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-scripts\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.428828 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt9fc\" (UniqueName: \"kubernetes.io/projected/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-kube-api-access-dt9fc\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.428904 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-run-httpd\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.428981 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-log-httpd\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.429039 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-config-data\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.429086 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531182 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531315 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-scripts\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 
14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531376 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt9fc\" (UniqueName: \"kubernetes.io/projected/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-kube-api-access-dt9fc\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531438 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-run-httpd\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531522 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-log-httpd\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531570 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-config-data\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.531601 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.532391 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-run-httpd\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.532548 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-log-httpd\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.536213 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.536286 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-scripts\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.536740 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.537141 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-config-data\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.552348 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dt9fc\" (UniqueName: \"kubernetes.io/projected/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-kube-api-access-dt9fc\") pod \"ceilometer-0\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " pod="openstack/ceilometer-0" Jan 27 14:35:48 crc kubenswrapper[4729]: I0127 14:35:48.629056 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:35:49 crc kubenswrapper[4729]: W0127 14:35:49.338986 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b3563d6_d2aa_4ca0_bd0d_d5a05031fdbc.slice/crio-3d1a4fe0859d07c2ed6caec9fc8774be5d77c977c02aa395e0b527a317035438 WatchSource:0}: Error finding container 3d1a4fe0859d07c2ed6caec9fc8774be5d77c977c02aa395e0b527a317035438: Status 404 returned error can't find the container with id 3d1a4fe0859d07c2ed6caec9fc8774be5d77c977c02aa395e0b527a317035438 Jan 27 14:35:49 crc kubenswrapper[4729]: I0127 14:35:49.348608 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:49 crc kubenswrapper[4729]: I0127 14:35:49.974377 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerStarted","Data":"3d1a4fe0859d07c2ed6caec9fc8774be5d77c977c02aa395e0b527a317035438"} Jan 27 14:35:50 crc kubenswrapper[4729]: I0127 14:35:50.072477 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d72be1b4-e7f1-47e9-9916-a0c37435b3c9" path="/var/lib/kubelet/pods/d72be1b4-e7f1-47e9-9916-a0c37435b3c9/volumes" Jan 27 14:35:50 crc kubenswrapper[4729]: I0127 14:35:50.993636 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerStarted","Data":"c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245"} Jan 27 14:35:50 crc 
kubenswrapper[4729]: I0127 14:35:50.994215 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerStarted","Data":"7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69"} Jan 27 14:35:52 crc kubenswrapper[4729]: I0127 14:35:52.005997 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerStarted","Data":"bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7"} Jan 27 14:35:54 crc kubenswrapper[4729]: I0127 14:35:54.026684 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerStarted","Data":"5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8"} Jan 27 14:35:54 crc kubenswrapper[4729]: I0127 14:35:54.027178 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:35:54 crc kubenswrapper[4729]: I0127 14:35:54.055161 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.090753006 podStartE2EDuration="6.055144008s" podCreationTimestamp="2026-01-27 14:35:48 +0000 UTC" firstStartedPulling="2026-01-27 14:35:49.345953283 +0000 UTC m=+1835.930144307" lastFinishedPulling="2026-01-27 14:35:53.310344305 +0000 UTC m=+1839.894535309" observedRunningTime="2026-01-27 14:35:54.044025571 +0000 UTC m=+1840.628216605" watchObservedRunningTime="2026-01-27 14:35:54.055144008 +0000 UTC m=+1840.639335012" Jan 27 14:35:58 crc kubenswrapper[4729]: I0127 14:35:58.082903 4729 generic.go:334] "Generic (PLEG): container finished" podID="2f535d0a-0620-44b3-84ba-c2119fa90330" containerID="88e37e5a6b9ea8a0931cf632b3a09663b478e2320c3a80caded8b2847f21e5c9" exitCode=0 Jan 27 14:35:58 crc kubenswrapper[4729]: I0127 14:35:58.082977 4729 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" event={"ID":"2f535d0a-0620-44b3-84ba-c2119fa90330","Type":"ContainerDied","Data":"88e37e5a6b9ea8a0931cf632b3a09663b478e2320c3a80caded8b2847f21e5c9"} Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.493161 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.622099 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-config-data\") pod \"2f535d0a-0620-44b3-84ba-c2119fa90330\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.622282 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-combined-ca-bundle\") pod \"2f535d0a-0620-44b3-84ba-c2119fa90330\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.622301 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-scripts\") pod \"2f535d0a-0620-44b3-84ba-c2119fa90330\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.622501 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqh7r\" (UniqueName: \"kubernetes.io/projected/2f535d0a-0620-44b3-84ba-c2119fa90330-kube-api-access-dqh7r\") pod \"2f535d0a-0620-44b3-84ba-c2119fa90330\" (UID: \"2f535d0a-0620-44b3-84ba-c2119fa90330\") " Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.629816 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2f535d0a-0620-44b3-84ba-c2119fa90330-kube-api-access-dqh7r" (OuterVolumeSpecName: "kube-api-access-dqh7r") pod "2f535d0a-0620-44b3-84ba-c2119fa90330" (UID: "2f535d0a-0620-44b3-84ba-c2119fa90330"). InnerVolumeSpecName "kube-api-access-dqh7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.630376 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-scripts" (OuterVolumeSpecName: "scripts") pod "2f535d0a-0620-44b3-84ba-c2119fa90330" (UID: "2f535d0a-0620-44b3-84ba-c2119fa90330"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.656102 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-config-data" (OuterVolumeSpecName: "config-data") pod "2f535d0a-0620-44b3-84ba-c2119fa90330" (UID: "2f535d0a-0620-44b3-84ba-c2119fa90330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.666668 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f535d0a-0620-44b3-84ba-c2119fa90330" (UID: "2f535d0a-0620-44b3-84ba-c2119fa90330"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.725304 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqh7r\" (UniqueName: \"kubernetes.io/projected/2f535d0a-0620-44b3-84ba-c2119fa90330-kube-api-access-dqh7r\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.725355 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.725372 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.725382 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f535d0a-0620-44b3-84ba-c2119fa90330-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.950686 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.951376 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-central-agent" containerID="cri-o://7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69" gracePeriod=30 Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.951454 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="proxy-httpd" containerID="cri-o://5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8" gracePeriod=30 Jan 27 14:35:59 crc 
kubenswrapper[4729]: I0127 14:35:59.951465 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-notification-agent" containerID="cri-o://c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245" gracePeriod=30 Jan 27 14:35:59 crc kubenswrapper[4729]: I0127 14:35:59.951527 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="sg-core" containerID="cri-o://bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7" gracePeriod=30 Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.134833 4729 generic.go:334] "Generic (PLEG): container finished" podID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerID="5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8" exitCode=0 Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.134865 4729 generic.go:334] "Generic (PLEG): container finished" podID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerID="bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7" exitCode=2 Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.134975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerDied","Data":"5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8"} Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.135009 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerDied","Data":"bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7"} Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.140299 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" 
event={"ID":"2f535d0a-0620-44b3-84ba-c2119fa90330","Type":"ContainerDied","Data":"b5278397dba4128e0bf00ac66b3be49dcb066c3ac2849391eb9719d86e61e7b0"} Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.140336 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5278397dba4128e0bf00ac66b3be49dcb066c3ac2849391eb9719d86e61e7b0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.140387 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7xp9" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.223369 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:36:00 crc kubenswrapper[4729]: E0127 14:36:00.223987 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f535d0a-0620-44b3-84ba-c2119fa90330" containerName="nova-cell0-conductor-db-sync" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.224013 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f535d0a-0620-44b3-84ba-c2119fa90330" containerName="nova-cell0-conductor-db-sync" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.224253 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f535d0a-0620-44b3-84ba-c2119fa90330" containerName="nova-cell0-conductor-db-sync" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.225161 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.227516 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-t9xxw" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.238150 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.252271 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.339831 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.339927 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbzf\" (UniqueName: \"kubernetes.io/projected/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-kube-api-access-dwbzf\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.339965 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.442080 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.442148 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbzf\" (UniqueName: \"kubernetes.io/projected/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-kube-api-access-dwbzf\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.442178 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.448567 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.452492 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.479655 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbzf\" (UniqueName: \"kubernetes.io/projected/76dbfd5b-82fa-4998-bd3b-6ead39c5f73b-kube-api-access-dwbzf\") pod \"nova-cell0-conductor-0\" (UID: 
\"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b\") " pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:00 crc kubenswrapper[4729]: I0127 14:36:00.567688 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.073827 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.158337 4729 generic.go:334] "Generic (PLEG): container finished" podID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerID="c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245" exitCode=0 Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.158429 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerDied","Data":"c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245"} Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.160191 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b","Type":"ContainerStarted","Data":"2cb56ea480516a272aac12cde75b2d980a2c80892ed71f2226ad430b3edffbdb"} Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.711761 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.776933 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-scripts\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.776991 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt9fc\" (UniqueName: \"kubernetes.io/projected/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-kube-api-access-dt9fc\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.777091 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-sg-core-conf-yaml\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.777120 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-run-httpd\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.777178 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-config-data\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.777248 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-combined-ca-bundle\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.777317 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-log-httpd\") pod \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\" (UID: \"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc\") " Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.778912 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.779463 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.784029 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-scripts" (OuterVolumeSpecName: "scripts") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.785585 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-kube-api-access-dt9fc" (OuterVolumeSpecName: "kube-api-access-dt9fc") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "kube-api-access-dt9fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.849786 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.880784 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.880826 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.880837 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.880848 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-scripts\") on node \"crc\" DevicePath \"\"" 
Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.880861 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt9fc\" (UniqueName: \"kubernetes.io/projected/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-kube-api-access-dt9fc\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.883942 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.936977 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-config-data" (OuterVolumeSpecName: "config-data") pod "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" (UID: "9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.983657 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:01 crc kubenswrapper[4729]: I0127 14:36:01.983701 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.183425 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76dbfd5b-82fa-4998-bd3b-6ead39c5f73b","Type":"ContainerStarted","Data":"6e8f4e7d2c1670f9a8f5fbde5c18ced4c2bc4354bc3477caca66dd76134260ec"} Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.189577 4729 generic.go:334] "Generic (PLEG): container finished" podID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerID="7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69" exitCode=0 Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.189753 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerDied","Data":"7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69"} Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.190060 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc","Type":"ContainerDied","Data":"3d1a4fe0859d07c2ed6caec9fc8774be5d77c977c02aa395e0b527a317035438"} Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.190135 4729 scope.go:117] "RemoveContainer" containerID="5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.189895 4729 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.211594 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.211573378 podStartE2EDuration="2.211573378s" podCreationTimestamp="2026-01-27 14:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:02.209255673 +0000 UTC m=+1848.793446687" watchObservedRunningTime="2026-01-27 14:36:02.211573378 +0000 UTC m=+1848.795764392" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.228422 4729 scope.go:117] "RemoveContainer" containerID="bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.240934 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.253993 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.281255 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.282461 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="sg-core" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.282494 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="sg-core" Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.282531 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-central-agent" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.282541 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-central-agent" Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.282590 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="proxy-httpd" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.282602 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="proxy-httpd" Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.282643 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-notification-agent" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.282651 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-notification-agent" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.283267 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="proxy-httpd" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.283335 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-notification-agent" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.283363 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="sg-core" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.283380 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" containerName="ceilometer-central-agent" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.283599 4729 scope.go:117] "RemoveContainer" containerID="c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.288999 4729 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.295751 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.299809 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.308909 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.355641 4729 scope.go:117] "RemoveContainer" containerID="7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410427 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410532 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-run-httpd\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410559 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410597 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-scripts\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410745 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-log-httpd\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410802 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-config-data\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.410939 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75q6s\" (UniqueName: \"kubernetes.io/projected/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-kube-api-access-75q6s\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.438679 4729 scope.go:117] "RemoveContainer" containerID="5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8" Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.439261 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8\": container with ID starting with 5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8 not found: ID does not exist" containerID="5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8" Jan 27 14:36:02 
crc kubenswrapper[4729]: I0127 14:36:02.439308 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8"} err="failed to get container status \"5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8\": rpc error: code = NotFound desc = could not find container \"5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8\": container with ID starting with 5526b41ae5b943a3250c20717e3d19855ed8aa5be693d9d35f0de6529845d6e8 not found: ID does not exist" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.439333 4729 scope.go:117] "RemoveContainer" containerID="bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7" Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.439710 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7\": container with ID starting with bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7 not found: ID does not exist" containerID="bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.439733 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7"} err="failed to get container status \"bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7\": rpc error: code = NotFound desc = could not find container \"bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7\": container with ID starting with bb82624ec219c8f61a9595707509aa15136fe514adcb0bf9229a02b0f4a887b7 not found: ID does not exist" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.439750 4729 scope.go:117] "RemoveContainer" containerID="c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245" Jan 27 
14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.440047 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245\": container with ID starting with c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245 not found: ID does not exist" containerID="c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.440080 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245"} err="failed to get container status \"c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245\": rpc error: code = NotFound desc = could not find container \"c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245\": container with ID starting with c16f4332478a00f76f3d97b862627901450de922da74fe2b702cbdb501218245 not found: ID does not exist" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.440104 4729 scope.go:117] "RemoveContainer" containerID="7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69" Jan 27 14:36:02 crc kubenswrapper[4729]: E0127 14:36:02.440705 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69\": container with ID starting with 7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69 not found: ID does not exist" containerID="7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.440761 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69"} err="failed to get container status 
\"7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69\": rpc error: code = NotFound desc = could not find container \"7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69\": container with ID starting with 7948b705a5b5533b364e54e7ff1c70e77233330233e6e6601342dd6865334e69 not found: ID does not exist" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.512684 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-log-httpd\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.512754 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-config-data\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.512781 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75q6s\" (UniqueName: \"kubernetes.io/projected/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-kube-api-access-75q6s\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.512924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.513005 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-run-httpd\") 
pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.513021 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.513050 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-scripts\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.513099 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-log-httpd\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.513645 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-run-httpd\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.517867 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-scripts\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.518487 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.518820 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.518971 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-config-data\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.538119 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75q6s\" (UniqueName: \"kubernetes.io/projected/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-kube-api-access-75q6s\") pod \"ceilometer-0\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " pod="openstack/ceilometer-0" Jan 27 14:36:02 crc kubenswrapper[4729]: I0127 14:36:02.657754 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:03 crc kubenswrapper[4729]: I0127 14:36:03.169426 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:03 crc kubenswrapper[4729]: W0127 14:36:03.180825 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b06e113_f17c_4f2f_8b88_d7aa48cd5305.slice/crio-4564052fdcc1c6bf29c2ac426712f611104f01ebd95bdcef51852d09f5f5166a WatchSource:0}: Error finding container 4564052fdcc1c6bf29c2ac426712f611104f01ebd95bdcef51852d09f5f5166a: Status 404 returned error can't find the container with id 4564052fdcc1c6bf29c2ac426712f611104f01ebd95bdcef51852d09f5f5166a Jan 27 14:36:03 crc kubenswrapper[4729]: I0127 14:36:03.205520 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerStarted","Data":"4564052fdcc1c6bf29c2ac426712f611104f01ebd95bdcef51852d09f5f5166a"} Jan 27 14:36:03 crc kubenswrapper[4729]: I0127 14:36:03.205703 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.063310 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc" path="/var/lib/kubelet/pods/9b3563d6-d2aa-4ca0-bd0d-d5a05031fdbc/volumes" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.592580 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-xnfdh"] Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.594911 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.612888 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xnfdh"] Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.637958 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-026c-account-create-update-ljl8z"] Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.639964 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.649663 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.671478 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-operator-scripts\") pod \"aodh-026c-account-create-update-ljl8z\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.671538 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfsf4\" (UniqueName: \"kubernetes.io/projected/987a64e5-85e5-4521-aa1a-8fb88f02e246-kube-api-access-tfsf4\") pod \"aodh-db-create-xnfdh\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.671576 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987a64e5-85e5-4521-aa1a-8fb88f02e246-operator-scripts\") pod \"aodh-db-create-xnfdh\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc 
kubenswrapper[4729]: I0127 14:36:04.671632 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcj7h\" (UniqueName: \"kubernetes.io/projected/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-kube-api-access-dcj7h\") pod \"aodh-026c-account-create-update-ljl8z\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.684422 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-026c-account-create-update-ljl8z"] Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.777931 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-operator-scripts\") pod \"aodh-026c-account-create-update-ljl8z\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.778115 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfsf4\" (UniqueName: \"kubernetes.io/projected/987a64e5-85e5-4521-aa1a-8fb88f02e246-kube-api-access-tfsf4\") pod \"aodh-db-create-xnfdh\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.778199 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987a64e5-85e5-4521-aa1a-8fb88f02e246-operator-scripts\") pod \"aodh-db-create-xnfdh\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.778326 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj7h\" (UniqueName: 
\"kubernetes.io/projected/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-kube-api-access-dcj7h\") pod \"aodh-026c-account-create-update-ljl8z\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.779332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-operator-scripts\") pod \"aodh-026c-account-create-update-ljl8z\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.780096 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987a64e5-85e5-4521-aa1a-8fb88f02e246-operator-scripts\") pod \"aodh-db-create-xnfdh\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.800691 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcj7h\" (UniqueName: \"kubernetes.io/projected/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-kube-api-access-dcj7h\") pod \"aodh-026c-account-create-update-ljl8z\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.801063 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfsf4\" (UniqueName: \"kubernetes.io/projected/987a64e5-85e5-4521-aa1a-8fb88f02e246-kube-api-access-tfsf4\") pod \"aodh-db-create-xnfdh\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.917733 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:04 crc kubenswrapper[4729]: I0127 14:36:04.971250 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:05 crc kubenswrapper[4729]: I0127 14:36:05.246561 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerStarted","Data":"2edd9c3e2692bc28207b7cc1145009a237a8ebca40e0e4cba2e0a8ecb75b8b9c"} Jan 27 14:36:05 crc kubenswrapper[4729]: I0127 14:36:05.459097 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-xnfdh"] Jan 27 14:36:05 crc kubenswrapper[4729]: I0127 14:36:05.574791 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-026c-account-create-update-ljl8z"] Jan 27 14:36:05 crc kubenswrapper[4729]: W0127 14:36:05.579474 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda985e50a_2e8c_4e6f_8fc5_a24c20cc7130.slice/crio-74d46acef7371c8a7bfb941a6d8007d50ce7ccb4241f207a7cf48405e680a6d2 WatchSource:0}: Error finding container 74d46acef7371c8a7bfb941a6d8007d50ce7ccb4241f207a7cf48405e680a6d2: Status 404 returned error can't find the container with id 74d46acef7371c8a7bfb941a6d8007d50ce7ccb4241f207a7cf48405e680a6d2 Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.258739 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerStarted","Data":"a6a7dfcaa9a49c43f4e8f21affc94c14890bfad714d8b59425ae26896c649510"} Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.261585 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-026c-account-create-update-ljl8z" 
event={"ID":"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130","Type":"ContainerStarted","Data":"2bfee60ed64c068605898eb84da911a4917307db6f876ba08e2d13c1701e27a9"} Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.261635 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-026c-account-create-update-ljl8z" event={"ID":"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130","Type":"ContainerStarted","Data":"74d46acef7371c8a7bfb941a6d8007d50ce7ccb4241f207a7cf48405e680a6d2"} Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.263837 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xnfdh" event={"ID":"987a64e5-85e5-4521-aa1a-8fb88f02e246","Type":"ContainerStarted","Data":"ef969febfa0aa9fdd00cd05a04646b84a79b0b63815490af282e0c0a08175ca7"} Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.263889 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xnfdh" event={"ID":"987a64e5-85e5-4521-aa1a-8fb88f02e246","Type":"ContainerStarted","Data":"52673dc31c0859b8b4bba4a13e9b3a86809bc95d32f471eda0208aae1aa55a44"} Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.283647 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-026c-account-create-update-ljl8z" podStartSLOduration=2.283620637 podStartE2EDuration="2.283620637s" podCreationTimestamp="2026-01-27 14:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:06.275986926 +0000 UTC m=+1852.860177930" watchObservedRunningTime="2026-01-27 14:36:06.283620637 +0000 UTC m=+1852.867811651" Jan 27 14:36:06 crc kubenswrapper[4729]: I0127 14:36:06.297723 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-xnfdh" podStartSLOduration=2.297702256 podStartE2EDuration="2.297702256s" podCreationTimestamp="2026-01-27 14:36:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:06.29602046 +0000 UTC m=+1852.880211464" watchObservedRunningTime="2026-01-27 14:36:06.297702256 +0000 UTC m=+1852.881893270" Jan 27 14:36:07 crc kubenswrapper[4729]: I0127 14:36:07.276967 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerStarted","Data":"8fa4f118ca4112cc333bea6d98e102dcc11b01b2564e3e31f690c9da0ef9e3ad"} Jan 27 14:36:07 crc kubenswrapper[4729]: I0127 14:36:07.280463 4729 generic.go:334] "Generic (PLEG): container finished" podID="a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" containerID="2bfee60ed64c068605898eb84da911a4917307db6f876ba08e2d13c1701e27a9" exitCode=0 Jan 27 14:36:07 crc kubenswrapper[4729]: I0127 14:36:07.280530 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-026c-account-create-update-ljl8z" event={"ID":"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130","Type":"ContainerDied","Data":"2bfee60ed64c068605898eb84da911a4917307db6f876ba08e2d13c1701e27a9"} Jan 27 14:36:07 crc kubenswrapper[4729]: I0127 14:36:07.286203 4729 generic.go:334] "Generic (PLEG): container finished" podID="987a64e5-85e5-4521-aa1a-8fb88f02e246" containerID="ef969febfa0aa9fdd00cd05a04646b84a79b0b63815490af282e0c0a08175ca7" exitCode=0 Jan 27 14:36:07 crc kubenswrapper[4729]: I0127 14:36:07.286247 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xnfdh" event={"ID":"987a64e5-85e5-4521-aa1a-8fb88f02e246","Type":"ContainerDied","Data":"ef969febfa0aa9fdd00cd05a04646b84a79b0b63815490af282e0c0a08175ca7"} Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.024411 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.049531 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.097704 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfsf4\" (UniqueName: \"kubernetes.io/projected/987a64e5-85e5-4521-aa1a-8fb88f02e246-kube-api-access-tfsf4\") pod \"987a64e5-85e5-4521-aa1a-8fb88f02e246\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.097814 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcj7h\" (UniqueName: \"kubernetes.io/projected/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-kube-api-access-dcj7h\") pod \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.097961 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987a64e5-85e5-4521-aa1a-8fb88f02e246-operator-scripts\") pod \"987a64e5-85e5-4521-aa1a-8fb88f02e246\" (UID: \"987a64e5-85e5-4521-aa1a-8fb88f02e246\") " Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.098016 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-operator-scripts\") pod \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\" (UID: \"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130\") " Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.098597 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/987a64e5-85e5-4521-aa1a-8fb88f02e246-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "987a64e5-85e5-4521-aa1a-8fb88f02e246" (UID: "987a64e5-85e5-4521-aa1a-8fb88f02e246"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.099330 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" (UID: "a985e50a-2e8c-4e6f-8fc5-a24c20cc7130"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.104833 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-kube-api-access-dcj7h" (OuterVolumeSpecName: "kube-api-access-dcj7h") pod "a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" (UID: "a985e50a-2e8c-4e6f-8fc5-a24c20cc7130"). InnerVolumeSpecName "kube-api-access-dcj7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.106229 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/987a64e5-85e5-4521-aa1a-8fb88f02e246-kube-api-access-tfsf4" (OuterVolumeSpecName: "kube-api-access-tfsf4") pod "987a64e5-85e5-4521-aa1a-8fb88f02e246" (UID: "987a64e5-85e5-4521-aa1a-8fb88f02e246"). InnerVolumeSpecName "kube-api-access-tfsf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.201018 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/987a64e5-85e5-4521-aa1a-8fb88f02e246-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.201063 4729 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.201078 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfsf4\" (UniqueName: \"kubernetes.io/projected/987a64e5-85e5-4521-aa1a-8fb88f02e246-kube-api-access-tfsf4\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.201093 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcj7h\" (UniqueName: \"kubernetes.io/projected/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130-kube-api-access-dcj7h\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.321202 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerStarted","Data":"0afc1beb2808eb45248b268d780eba85d1634996c6da5a94b8cecf2d054695dc"} Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.321676 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.324715 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-026c-account-create-update-ljl8z" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.325047 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-026c-account-create-update-ljl8z" event={"ID":"a985e50a-2e8c-4e6f-8fc5-a24c20cc7130","Type":"ContainerDied","Data":"74d46acef7371c8a7bfb941a6d8007d50ce7ccb4241f207a7cf48405e680a6d2"} Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.325089 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d46acef7371c8a7bfb941a6d8007d50ce7ccb4241f207a7cf48405e680a6d2" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.332812 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-xnfdh" event={"ID":"987a64e5-85e5-4521-aa1a-8fb88f02e246","Type":"ContainerDied","Data":"52673dc31c0859b8b4bba4a13e9b3a86809bc95d32f471eda0208aae1aa55a44"} Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.332862 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52673dc31c0859b8b4bba4a13e9b3a86809bc95d32f471eda0208aae1aa55a44" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.332945 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-xnfdh" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.375293 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.083128173 podStartE2EDuration="7.375272658s" podCreationTimestamp="2026-01-27 14:36:02 +0000 UTC" firstStartedPulling="2026-01-27 14:36:03.183339576 +0000 UTC m=+1849.767530580" lastFinishedPulling="2026-01-27 14:36:08.475484061 +0000 UTC m=+1855.059675065" observedRunningTime="2026-01-27 14:36:09.359585465 +0000 UTC m=+1855.943776469" watchObservedRunningTime="2026-01-27 14:36:09.375272658 +0000 UTC m=+1855.959463662" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.957303 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-rkrfd"] Jan 27 14:36:09 crc kubenswrapper[4729]: E0127 14:36:09.957852 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="987a64e5-85e5-4521-aa1a-8fb88f02e246" containerName="mariadb-database-create" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.957870 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="987a64e5-85e5-4521-aa1a-8fb88f02e246" containerName="mariadb-database-create" Jan 27 14:36:09 crc kubenswrapper[4729]: E0127 14:36:09.957899 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" containerName="mariadb-account-create-update" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.957905 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" containerName="mariadb-account-create-update" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.958156 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="987a64e5-85e5-4521-aa1a-8fb88f02e246" containerName="mariadb-database-create" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.958177 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" containerName="mariadb-account-create-update" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.958954 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.969044 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.969198 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zvnk7" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.969436 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.974458 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 14:36:09 crc kubenswrapper[4729]: I0127 14:36:09.991767 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rkrfd"] Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.023122 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-scripts\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.023460 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-config-data\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.023489 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vghxk\" (UniqueName: \"kubernetes.io/projected/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-kube-api-access-vghxk\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.023620 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-combined-ca-bundle\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.125581 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-scripts\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.125656 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-config-data\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.125680 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghxk\" (UniqueName: \"kubernetes.io/projected/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-kube-api-access-vghxk\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.125778 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-combined-ca-bundle\") pod 
\"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.131309 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-config-data\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.131447 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-combined-ca-bundle\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.135586 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-scripts\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.160788 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vghxk\" (UniqueName: \"kubernetes.io/projected/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-kube-api-access-vghxk\") pod \"aodh-db-sync-rkrfd\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.296076 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.609217 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 27 14:36:10 crc kubenswrapper[4729]: W0127 14:36:10.858735 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59bdaa5a_26c0_4a26_af8d_b6a4306f904c.slice/crio-d62824527522792383f9b27ae505c37821a17f579120b8c092cd37ea26ceb1fe WatchSource:0}: Error finding container d62824527522792383f9b27ae505c37821a17f579120b8c092cd37ea26ceb1fe: Status 404 returned error can't find the container with id d62824527522792383f9b27ae505c37821a17f579120b8c092cd37ea26ceb1fe Jan 27 14:36:10 crc kubenswrapper[4729]: I0127 14:36:10.876400 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-rkrfd"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.212339 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8g82q"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.214576 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.217563 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.217729 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.242544 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8g82q"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.276610 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.276721 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-scripts\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.276906 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4spg6\" (UniqueName: \"kubernetes.io/projected/d33365ad-887a-4c00-83a7-419a7f002d92-kube-api-access-4spg6\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.277103 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-config-data\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.357595 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rkrfd" event={"ID":"59bdaa5a-26c0-4a26-af8d-b6a4306f904c","Type":"ContainerStarted","Data":"d62824527522792383f9b27ae505c37821a17f579120b8c092cd37ea26ceb1fe"} Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.380005 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-scripts\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.380104 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4spg6\" (UniqueName: \"kubernetes.io/projected/d33365ad-887a-4c00-83a7-419a7f002d92-kube-api-access-4spg6\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.380181 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-config-data\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.380314 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: 
\"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.393714 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.394627 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-config-data\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.417277 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-scripts\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.422483 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4spg6\" (UniqueName: \"kubernetes.io/projected/d33365ad-887a-4c00-83a7-419a7f002d92-kube-api-access-4spg6\") pod \"nova-cell0-cell-mapping-8g82q\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.456036 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.458369 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.488086 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.492047 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.530975 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.533232 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.537823 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.548928 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.549425 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.597777 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-config-data\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.597931 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-logs\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.598106 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-config-data\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.598178 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.598224 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.598260 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e3ce57-7767-4d97-909d-b5f2f3c402e2-logs\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.598383 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttck\" (UniqueName: \"kubernetes.io/projected/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-kube-api-access-sttck\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.598425 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pn7vb\" (UniqueName: \"kubernetes.io/projected/68e3ce57-7767-4d97-909d-b5f2f3c402e2-kube-api-access-pn7vb\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716030 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716079 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716120 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e3ce57-7767-4d97-909d-b5f2f3c402e2-logs\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716173 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sttck\" (UniqueName: \"kubernetes.io/projected/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-kube-api-access-sttck\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716201 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn7vb\" (UniqueName: \"kubernetes.io/projected/68e3ce57-7767-4d97-909d-b5f2f3c402e2-kube-api-access-pn7vb\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " 
pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716291 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-config-data\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716337 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-logs\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.716416 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-config-data\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.718403 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-logs\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.719023 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e3ce57-7767-4d97-909d-b5f2f3c402e2-logs\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.728502 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-config-data\") pod \"nova-metadata-0\" (UID: 
\"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.743497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.749649 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.772499 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-config-data\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.797672 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sttck\" (UniqueName: \"kubernetes.io/projected/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-kube-api-access-sttck\") pod \"nova-metadata-0\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " pod="openstack/nova-metadata-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.810447 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.812386 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.836277 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.850468 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn7vb\" (UniqueName: \"kubernetes.io/projected/68e3ce57-7767-4d97-909d-b5f2f3c402e2-kube-api-access-pn7vb\") pod \"nova-api-0\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.857284 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.859017 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.868526 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.869513 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.927408 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-config-data\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.927549 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.929043 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hgd5\" (UniqueName: \"kubernetes.io/projected/112efdd5-3270-4415-805b-898634eebeb6-kube-api-access-8hgd5\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:11 crc kubenswrapper[4729]: I0127 14:36:11.950116 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.015339 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-fmwhh"] Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.017985 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.035703 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-config-data\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.035817 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.035862 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxlx\" (UniqueName: \"kubernetes.io/projected/2c242cff-8f90-41ee-8c60-62710709cad9-kube-api-access-msxlx\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.035947 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.036088 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hgd5\" (UniqueName: \"kubernetes.io/projected/112efdd5-3270-4415-805b-898634eebeb6-kube-api-access-8hgd5\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc 
kubenswrapper[4729]: I0127 14:36:12.036141 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.090013 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hgd5\" (UniqueName: \"kubernetes.io/projected/112efdd5-3270-4415-805b-898634eebeb6-kube-api-access-8hgd5\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.118043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.139907 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-config\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.139967 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.139992 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.140015 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.140045 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxlx\" (UniqueName: \"kubernetes.io/projected/2c242cff-8f90-41ee-8c60-62710709cad9-kube-api-access-msxlx\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.140100 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.140178 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862kg\" (UniqueName: \"kubernetes.io/projected/c412d342-19d8-4203-9b84-a57c996cb21b-kube-api-access-862kg\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.140214 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.140268 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.143552 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.146165 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-config-data\") pod \"nova-scheduler-0\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.153787 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.177722 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msxlx\" (UniqueName: 
\"kubernetes.io/projected/2c242cff-8f90-41ee-8c60-62710709cad9-kube-api-access-msxlx\") pod \"nova-cell1-novncproxy-0\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.244541 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.244953 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-862kg\" (UniqueName: \"kubernetes.io/projected/c412d342-19d8-4203-9b84-a57c996cb21b-kube-api-access-862kg\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.245023 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.245088 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-config\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.245121 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.245142 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.246241 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.246792 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.246809 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-config\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.247433 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-svc\") pod 
\"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.247557 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.294453 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-862kg\" (UniqueName: \"kubernetes.io/projected/c412d342-19d8-4203-9b84-a57c996cb21b-kube-api-access-862kg\") pod \"dnsmasq-dns-568d7fd7cf-fmwhh\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.339664 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.339720 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-fmwhh"] Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.462025 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.510611 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.592281 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.752686 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:12 crc kubenswrapper[4729]: I0127 14:36:12.820715 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8g82q"] Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.014743 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.368994 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:36:13 crc kubenswrapper[4729]: W0127 14:36:13.414168 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93e7d1df_5e4c_4683_b45a_351aabeeb3bb.slice/crio-4bb175dcb08ca999e084bc009506592882aaa40f6e37680f5846ba4b6676cafa WatchSource:0}: Error finding container 4bb175dcb08ca999e084bc009506592882aaa40f6e37680f5846ba4b6676cafa: Status 404 returned error can't find the container with id 4bb175dcb08ca999e084bc009506592882aaa40f6e37680f5846ba4b6676cafa Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.460921 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68e3ce57-7767-4d97-909d-b5f2f3c402e2","Type":"ContainerStarted","Data":"55b8a415e85079da8e5dd5083331155a31b08cf90b435377dbb5f331acb886f5"} Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.485313 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8g82q" event={"ID":"d33365ad-887a-4c00-83a7-419a7f002d92","Type":"ContainerStarted","Data":"20fd921f0234edb482b99a370d6e4a811e3a761c8c3af430d7abd784c2294c3d"} Jan 27 14:36:13 crc kubenswrapper[4729]: W0127 14:36:13.535368 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod112efdd5_3270_4415_805b_898634eebeb6.slice/crio-fa2549c5f25c495a67e3c31a166366273b6a04fc753c65f3f6836fb5469f60ce 
WatchSource:0}: Error finding container fa2549c5f25c495a67e3c31a166366273b6a04fc753c65f3f6836fb5469f60ce: Status 404 returned error can't find the container with id fa2549c5f25c495a67e3c31a166366273b6a04fc753c65f3f6836fb5469f60ce Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.555563 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.587815 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:36:13 crc kubenswrapper[4729]: I0127 14:36:13.934839 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-fmwhh"] Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.239489 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bkmjh"] Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.243146 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.249304 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.251055 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.273091 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-scripts\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.273908 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.274139 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwl2z\" (UniqueName: \"kubernetes.io/projected/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-kube-api-access-zwl2z\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.274431 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-config-data\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.279653 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bkmjh"] Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.381979 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-scripts\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.382089 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: 
\"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.382114 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwl2z\" (UniqueName: \"kubernetes.io/projected/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-kube-api-access-zwl2z\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.382579 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-config-data\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.387771 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-scripts\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.389860 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-config-data\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.401860 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: 
\"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.416138 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwl2z\" (UniqueName: \"kubernetes.io/projected/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-kube-api-access-zwl2z\") pod \"nova-cell1-conductor-db-sync-bkmjh\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.527832 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8g82q" event={"ID":"d33365ad-887a-4c00-83a7-419a7f002d92","Type":"ContainerStarted","Data":"b26bfeb0557ccd8edf7f132feb45fd03133c55bd1df947058871e9853008ed1b"} Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.552595 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"112efdd5-3270-4415-805b-898634eebeb6","Type":"ContainerStarted","Data":"fa2549c5f25c495a67e3c31a166366273b6a04fc753c65f3f6836fb5469f60ce"} Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.557995 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8g82q" podStartSLOduration=3.557976967 podStartE2EDuration="3.557976967s" podCreationTimestamp="2026-01-27 14:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:14.550649974 +0000 UTC m=+1861.134840988" watchObservedRunningTime="2026-01-27 14:36:14.557976967 +0000 UTC m=+1861.142167981" Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.560064 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93e7d1df-5e4c-4683-b45a-351aabeeb3bb","Type":"ContainerStarted","Data":"4bb175dcb08ca999e084bc009506592882aaa40f6e37680f5846ba4b6676cafa"} Jan 
27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.563408 4729 generic.go:334] "Generic (PLEG): container finished" podID="c412d342-19d8-4203-9b84-a57c996cb21b" containerID="b157ad2d3b07f9cbfcea12e5f3a08b1b60ca941fdb5dd97f3662e9d9b98b7ac2" exitCode=0 Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.563488 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" event={"ID":"c412d342-19d8-4203-9b84-a57c996cb21b","Type":"ContainerDied","Data":"b157ad2d3b07f9cbfcea12e5f3a08b1b60ca941fdb5dd97f3662e9d9b98b7ac2"} Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.563707 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" event={"ID":"c412d342-19d8-4203-9b84-a57c996cb21b","Type":"ContainerStarted","Data":"c67b25e39eaaceeffe436b4fc7c13ebc68ff10fb3770e8641ff316f2128b9f74"} Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.567214 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c242cff-8f90-41ee-8c60-62710709cad9","Type":"ContainerStarted","Data":"bd3373cc97d3cbad33ab996436f7e46442493868bd9a5949b654e39d0e423d37"} Jan 27 14:36:14 crc kubenswrapper[4729]: I0127 14:36:14.654324 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.496904 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.534184 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bkmjh"] Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.584749 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.641390 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" event={"ID":"fac4d439-3f94-4fc7-bd2f-3b39c25a5987","Type":"ContainerStarted","Data":"c6037efdf77e4e3c2cfc421b7fb5035732166132f778d891571fc5bc8a7f6615"} Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.659983 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" event={"ID":"c412d342-19d8-4203-9b84-a57c996cb21b","Type":"ContainerStarted","Data":"d041c1cfb3c6745063ba7829a6344f90337f4e4cd012197dfd6e02498cd8e6b1"} Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.660425 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:15 crc kubenswrapper[4729]: I0127 14:36:15.687543 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" podStartSLOduration=4.687518358 podStartE2EDuration="4.687518358s" podCreationTimestamp="2026-01-27 14:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:15.680491443 +0000 UTC m=+1862.264682457" watchObservedRunningTime="2026-01-27 14:36:15.687518358 +0000 UTC m=+1862.271709372" Jan 27 14:36:16 crc kubenswrapper[4729]: I0127 
14:36:16.676506 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" event={"ID":"fac4d439-3f94-4fc7-bd2f-3b39c25a5987","Type":"ContainerStarted","Data":"506100e1d54c9723e7b2132698324a6df8d2f923f8ac18d84bd20c225e57c48d"} Jan 27 14:36:16 crc kubenswrapper[4729]: I0127 14:36:16.700922 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" podStartSLOduration=2.7009001660000003 podStartE2EDuration="2.700900166s" podCreationTimestamp="2026-01-27 14:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:16.695087556 +0000 UTC m=+1863.279278570" watchObservedRunningTime="2026-01-27 14:36:16.700900166 +0000 UTC m=+1863.285091170" Jan 27 14:36:22 crc kubenswrapper[4729]: I0127 14:36:22.755848 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:36:22 crc kubenswrapper[4729]: I0127 14:36:22.822708 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ggrm2"] Jan 27 14:36:22 crc kubenswrapper[4729]: I0127 14:36:22.822947 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="dnsmasq-dns" containerID="cri-o://22437fc2564b3fb71032a61c7d785899d431364ba912efc3cc173c497092a868" gracePeriod=10 Jan 27 14:36:23 crc kubenswrapper[4729]: I0127 14:36:23.774664 4729 generic.go:334] "Generic (PLEG): container finished" podID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerID="22437fc2564b3fb71032a61c7d785899d431364ba912efc3cc173c497092a868" exitCode=0 Jan 27 14:36:23 crc kubenswrapper[4729]: I0127 14:36:23.775009 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" 
event={"ID":"b3bbc635-af3b-445b-9e20-e690038a4e6b","Type":"ContainerDied","Data":"22437fc2564b3fb71032a61c7d785899d431364ba912efc3cc173c497092a868"} Jan 27 14:36:23 crc kubenswrapper[4729]: I0127 14:36:23.778934 4729 generic.go:334] "Generic (PLEG): container finished" podID="d33365ad-887a-4c00-83a7-419a7f002d92" containerID="b26bfeb0557ccd8edf7f132feb45fd03133c55bd1df947058871e9853008ed1b" exitCode=0 Jan 27 14:36:23 crc kubenswrapper[4729]: I0127 14:36:23.778980 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8g82q" event={"ID":"d33365ad-887a-4c00-83a7-419a7f002d92","Type":"ContainerDied","Data":"b26bfeb0557ccd8edf7f132feb45fd03133c55bd1df947058871e9853008ed1b"} Jan 27 14:36:24 crc kubenswrapper[4729]: I0127 14:36:24.843348 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.213:5353: connect: connection refused" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.393723 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.435189 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-config-data\") pod \"d33365ad-887a-4c00-83a7-419a7f002d92\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.435355 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4spg6\" (UniqueName: \"kubernetes.io/projected/d33365ad-887a-4c00-83a7-419a7f002d92-kube-api-access-4spg6\") pod \"d33365ad-887a-4c00-83a7-419a7f002d92\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.435417 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-combined-ca-bundle\") pod \"d33365ad-887a-4c00-83a7-419a7f002d92\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.435472 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-scripts\") pod \"d33365ad-887a-4c00-83a7-419a7f002d92\" (UID: \"d33365ad-887a-4c00-83a7-419a7f002d92\") " Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.522362 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-scripts" (OuterVolumeSpecName: "scripts") pod "d33365ad-887a-4c00-83a7-419a7f002d92" (UID: "d33365ad-887a-4c00-83a7-419a7f002d92"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.522483 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33365ad-887a-4c00-83a7-419a7f002d92-kube-api-access-4spg6" (OuterVolumeSpecName: "kube-api-access-4spg6") pod "d33365ad-887a-4c00-83a7-419a7f002d92" (UID: "d33365ad-887a-4c00-83a7-419a7f002d92"). InnerVolumeSpecName "kube-api-access-4spg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.552625 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.552686 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4spg6\" (UniqueName: \"kubernetes.io/projected/d33365ad-887a-4c00-83a7-419a7f002d92-kube-api-access-4spg6\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.565756 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-config-data" (OuterVolumeSpecName: "config-data") pod "d33365ad-887a-4c00-83a7-419a7f002d92" (UID: "d33365ad-887a-4c00-83a7-419a7f002d92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.572866 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d33365ad-887a-4c00-83a7-419a7f002d92" (UID: "d33365ad-887a-4c00-83a7-419a7f002d92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.655377 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.655416 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d33365ad-887a-4c00-83a7-419a7f002d92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.840994 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8g82q" event={"ID":"d33365ad-887a-4c00-83a7-419a7f002d92","Type":"ContainerDied","Data":"20fd921f0234edb482b99a370d6e4a811e3a761c8c3af430d7abd784c2294c3d"} Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.841257 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fd921f0234edb482b99a370d6e4a811e3a761c8c3af430d7abd784c2294c3d" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.841143 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8g82q" Jan 27 14:36:25 crc kubenswrapper[4729]: I0127 14:36:25.987626 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:36:26 crc kubenswrapper[4729]: I0127 14:36:26.002371 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:36:27 crc kubenswrapper[4729]: I0127 14:36:27.831839 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:36:27 crc kubenswrapper[4729]: I0127 14:36:27.957324 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" event={"ID":"b3bbc635-af3b-445b-9e20-e690038a4e6b","Type":"ContainerDied","Data":"829f0a9684834105ea4eb3a7e6fe9f95ad50a7daf592297df8cb8f13b6d55870"} Jan 27 14:36:27 crc kubenswrapper[4729]: I0127 14:36:27.957369 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829f0a9684834105ea4eb3a7e6fe9f95ad50a7daf592297df8cb8f13b6d55870" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.022609 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.113557 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-swift-storage-0\") pod \"b3bbc635-af3b-445b-9e20-e690038a4e6b\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.113670 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-config\") pod \"b3bbc635-af3b-445b-9e20-e690038a4e6b\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.113783 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swgj2\" (UniqueName: \"kubernetes.io/projected/b3bbc635-af3b-445b-9e20-e690038a4e6b-kube-api-access-swgj2\") pod \"b3bbc635-af3b-445b-9e20-e690038a4e6b\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.113822 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-sb\") pod \"b3bbc635-af3b-445b-9e20-e690038a4e6b\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.114028 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-svc\") pod \"b3bbc635-af3b-445b-9e20-e690038a4e6b\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.114102 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-nb\") pod \"b3bbc635-af3b-445b-9e20-e690038a4e6b\" (UID: \"b3bbc635-af3b-445b-9e20-e690038a4e6b\") " Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.124315 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bbc635-af3b-445b-9e20-e690038a4e6b-kube-api-access-swgj2" (OuterVolumeSpecName: "kube-api-access-swgj2") pod "b3bbc635-af3b-445b-9e20-e690038a4e6b" (UID: "b3bbc635-af3b-445b-9e20-e690038a4e6b"). InnerVolumeSpecName "kube-api-access-swgj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.216699 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swgj2\" (UniqueName: \"kubernetes.io/projected/b3bbc635-af3b-445b-9e20-e690038a4e6b-kube-api-access-swgj2\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.370666 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3bbc635-af3b-445b-9e20-e690038a4e6b" (UID: "b3bbc635-af3b-445b-9e20-e690038a4e6b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.378114 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-config" (OuterVolumeSpecName: "config") pod "b3bbc635-af3b-445b-9e20-e690038a4e6b" (UID: "b3bbc635-af3b-445b-9e20-e690038a4e6b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.380538 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3bbc635-af3b-445b-9e20-e690038a4e6b" (UID: "b3bbc635-af3b-445b-9e20-e690038a4e6b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.387044 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3bbc635-af3b-445b-9e20-e690038a4e6b" (UID: "b3bbc635-af3b-445b-9e20-e690038a4e6b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.388065 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3bbc635-af3b-445b-9e20-e690038a4e6b" (UID: "b3bbc635-af3b-445b-9e20-e690038a4e6b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.420454 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.420702 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.420794 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.420952 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.421039 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3bbc635-af3b-445b-9e20-e690038a4e6b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.970845 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93e7d1df-5e4c-4683-b45a-351aabeeb3bb","Type":"ContainerStarted","Data":"c320f5e4171bfde4800e677a059ad36618d63f24632bff89159fe70b19697372"} Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.971675 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93e7d1df-5e4c-4683-b45a-351aabeeb3bb","Type":"ContainerStarted","Data":"eff650bff9b57d19e0b2374265daff0191406a667aa431bfe63838ba9839a386"} Jan 27 14:36:28 crc 
kubenswrapper[4729]: I0127 14:36:28.971089 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-log" containerID="cri-o://eff650bff9b57d19e0b2374265daff0191406a667aa431bfe63838ba9839a386" gracePeriod=30 Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.972584 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-metadata" containerID="cri-o://c320f5e4171bfde4800e677a059ad36618d63f24632bff89159fe70b19697372" gracePeriod=30 Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.974981 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2c242cff-8f90-41ee-8c60-62710709cad9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://eeeae124e04392dab420089513dac9100188aa2eb024397f800be106d9e6e70e" gracePeriod=30 Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.975055 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c242cff-8f90-41ee-8c60-62710709cad9","Type":"ContainerStarted","Data":"eeeae124e04392dab420089513dac9100188aa2eb024397f800be106d9e6e70e"} Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.986411 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68e3ce57-7767-4d97-909d-b5f2f3c402e2","Type":"ContainerStarted","Data":"363ff0ad06a2d6c7b2eb159b87d41bba96f7cf7c355c33bc23d9a38addeab9cc"} Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.986491 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68e3ce57-7767-4d97-909d-b5f2f3c402e2","Type":"ContainerStarted","Data":"2a9b4e0146be64f0e0084035c096cee30272b6947fbd05a1324c8588f8d63dfe"} Jan 27 14:36:28 crc 
kubenswrapper[4729]: I0127 14:36:28.986562 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-log" containerID="cri-o://2a9b4e0146be64f0e0084035c096cee30272b6947fbd05a1324c8588f8d63dfe" gracePeriod=30 Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.986583 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-api" containerID="cri-o://363ff0ad06a2d6c7b2eb159b87d41bba96f7cf7c355c33bc23d9a38addeab9cc" gracePeriod=30 Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.997170 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rkrfd" event={"ID":"59bdaa5a-26c0-4a26-af8d-b6a4306f904c","Type":"ContainerStarted","Data":"d08b1c16d070f6241a6ef717ac54d556587ae73ad61a485bc42e64857531401b"} Jan 27 14:36:28 crc kubenswrapper[4729]: I0127 14:36:28.999767 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ggrm2" Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.001180 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="112efdd5-3270-4415-805b-898634eebeb6" containerName="nova-scheduler-scheduler" containerID="cri-o://5f5bbaf6967d1fd8d17bbcb7b9cfb3de39f2490d5ee0f165c1570f0b45b18c69" gracePeriod=30 Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.001548 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"112efdd5-3270-4415-805b-898634eebeb6","Type":"ContainerStarted","Data":"5f5bbaf6967d1fd8d17bbcb7b9cfb3de39f2490d5ee0f165c1570f0b45b18c69"} Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.009452 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.680787304 podStartE2EDuration="18.009427814s" podCreationTimestamp="2026-01-27 14:36:11 +0000 UTC" firstStartedPulling="2026-01-27 14:36:13.437203628 +0000 UTC m=+1860.021394622" lastFinishedPulling="2026-01-27 14:36:25.765844128 +0000 UTC m=+1872.350035132" observedRunningTime="2026-01-27 14:36:28.994279914 +0000 UTC m=+1875.578470928" watchObservedRunningTime="2026-01-27 14:36:29.009427814 +0000 UTC m=+1875.593618828" Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.021436 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.335231895 podStartE2EDuration="18.021412482s" podCreationTimestamp="2026-01-27 14:36:11 +0000 UTC" firstStartedPulling="2026-01-27 14:36:13.068504434 +0000 UTC m=+1859.652695438" lastFinishedPulling="2026-01-27 14:36:25.754685021 +0000 UTC m=+1872.338876025" observedRunningTime="2026-01-27 14:36:29.016575307 +0000 UTC m=+1875.600766311" watchObservedRunningTime="2026-01-27 14:36:29.021412482 +0000 UTC m=+1875.605603496" Jan 27 14:36:29 crc kubenswrapper[4729]: 
I0127 14:36:29.067160 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=5.889723607 podStartE2EDuration="18.067133386s" podCreationTimestamp="2026-01-27 14:36:11 +0000 UTC" firstStartedPulling="2026-01-27 14:36:13.588414259 +0000 UTC m=+1860.172605263" lastFinishedPulling="2026-01-27 14:36:25.765824038 +0000 UTC m=+1872.350015042" observedRunningTime="2026-01-27 14:36:29.048153249 +0000 UTC m=+1875.632344253" watchObservedRunningTime="2026-01-27 14:36:29.067133386 +0000 UTC m=+1875.651324390" Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.076372 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.859849355 podStartE2EDuration="18.076354404s" podCreationTimestamp="2026-01-27 14:36:11 +0000 UTC" firstStartedPulling="2026-01-27 14:36:13.539170217 +0000 UTC m=+1860.123361221" lastFinishedPulling="2026-01-27 14:36:25.755675266 +0000 UTC m=+1872.339866270" observedRunningTime="2026-01-27 14:36:29.070189976 +0000 UTC m=+1875.654380980" watchObservedRunningTime="2026-01-27 14:36:29.076354404 +0000 UTC m=+1875.660545408" Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.097039 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-rkrfd" podStartSLOduration=3.129276032 podStartE2EDuration="20.097018805s" podCreationTimestamp="2026-01-27 14:36:09 +0000 UTC" firstStartedPulling="2026-01-27 14:36:10.86144001 +0000 UTC m=+1857.445631014" lastFinishedPulling="2026-01-27 14:36:27.829182783 +0000 UTC m=+1874.413373787" observedRunningTime="2026-01-27 14:36:29.084409821 +0000 UTC m=+1875.668600835" watchObservedRunningTime="2026-01-27 14:36:29.097018805 +0000 UTC m=+1875.681209819" Jan 27 14:36:29 crc kubenswrapper[4729]: I0127 14:36:29.130912 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ggrm2"] Jan 27 14:36:29 crc 
kubenswrapper[4729]: I0127 14:36:29.147680 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ggrm2"] Jan 27 14:36:30 crc kubenswrapper[4729]: I0127 14:36:30.011855 4729 generic.go:334] "Generic (PLEG): container finished" podID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerID="2a9b4e0146be64f0e0084035c096cee30272b6947fbd05a1324c8588f8d63dfe" exitCode=143 Jan 27 14:36:30 crc kubenswrapper[4729]: I0127 14:36:30.011918 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68e3ce57-7767-4d97-909d-b5f2f3c402e2","Type":"ContainerDied","Data":"2a9b4e0146be64f0e0084035c096cee30272b6947fbd05a1324c8588f8d63dfe"} Jan 27 14:36:30 crc kubenswrapper[4729]: I0127 14:36:30.015186 4729 generic.go:334] "Generic (PLEG): container finished" podID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerID="eff650bff9b57d19e0b2374265daff0191406a667aa431bfe63838ba9839a386" exitCode=143 Jan 27 14:36:30 crc kubenswrapper[4729]: I0127 14:36:30.015271 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93e7d1df-5e4c-4683-b45a-351aabeeb3bb","Type":"ContainerDied","Data":"eff650bff9b57d19e0b2374265daff0191406a667aa431bfe63838ba9839a386"} Jan 27 14:36:30 crc kubenswrapper[4729]: I0127 14:36:30.071189 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" path="/var/lib/kubelet/pods/b3bbc635-af3b-445b-9e20-e690038a4e6b/volumes" Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.038938 4729 generic.go:334] "Generic (PLEG): container finished" podID="59bdaa5a-26c0-4a26-af8d-b6a4306f904c" containerID="d08b1c16d070f6241a6ef717ac54d556587ae73ad61a485bc42e64857531401b" exitCode=0 Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.039049 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rkrfd" 
event={"ID":"59bdaa5a-26c0-4a26-af8d-b6a4306f904c","Type":"ContainerDied","Data":"d08b1c16d070f6241a6ef717ac54d556587ae73ad61a485bc42e64857531401b"} Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.463407 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.463714 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.511339 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.592951 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:36:32 crc kubenswrapper[4729]: I0127 14:36:32.669125 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.527589 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.643236 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-combined-ca-bundle\") pod \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.643397 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-scripts\") pod \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.643462 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vghxk\" (UniqueName: \"kubernetes.io/projected/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-kube-api-access-vghxk\") pod \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.643650 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-config-data\") pod \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\" (UID: \"59bdaa5a-26c0-4a26-af8d-b6a4306f904c\") " Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.650925 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-kube-api-access-vghxk" (OuterVolumeSpecName: "kube-api-access-vghxk") pod "59bdaa5a-26c0-4a26-af8d-b6a4306f904c" (UID: "59bdaa5a-26c0-4a26-af8d-b6a4306f904c"). InnerVolumeSpecName "kube-api-access-vghxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.653299 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-scripts" (OuterVolumeSpecName: "scripts") pod "59bdaa5a-26c0-4a26-af8d-b6a4306f904c" (UID: "59bdaa5a-26c0-4a26-af8d-b6a4306f904c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.680204 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59bdaa5a-26c0-4a26-af8d-b6a4306f904c" (UID: "59bdaa5a-26c0-4a26-af8d-b6a4306f904c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.685348 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-config-data" (OuterVolumeSpecName: "config-data") pod "59bdaa5a-26c0-4a26-af8d-b6a4306f904c" (UID: "59bdaa5a-26c0-4a26-af8d-b6a4306f904c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.746638 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.746677 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.746691 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:33 crc kubenswrapper[4729]: I0127 14:36:33.746701 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vghxk\" (UniqueName: \"kubernetes.io/projected/59bdaa5a-26c0-4a26-af8d-b6a4306f904c-kube-api-access-vghxk\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.064628 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-rkrfd" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.066675 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-rkrfd" event={"ID":"59bdaa5a-26c0-4a26-af8d-b6a4306f904c","Type":"ContainerDied","Data":"d62824527522792383f9b27ae505c37821a17f579120b8c092cd37ea26ceb1fe"} Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.066718 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62824527522792383f9b27ae505c37821a17f579120b8c092cd37ea26ceb1fe" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.708230 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 14:36:34 crc kubenswrapper[4729]: E0127 14:36:34.708847 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="dnsmasq-dns" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.708863 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="dnsmasq-dns" Jan 27 14:36:34 crc kubenswrapper[4729]: E0127 14:36:34.708872 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33365ad-887a-4c00-83a7-419a7f002d92" containerName="nova-manage" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.708896 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33365ad-887a-4c00-83a7-419a7f002d92" containerName="nova-manage" Jan 27 14:36:34 crc kubenswrapper[4729]: E0127 14:36:34.708914 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="init" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.708920 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="init" Jan 27 14:36:34 crc kubenswrapper[4729]: E0127 14:36:34.708954 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59bdaa5a-26c0-4a26-af8d-b6a4306f904c" containerName="aodh-db-sync" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.708961 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bdaa5a-26c0-4a26-af8d-b6a4306f904c" containerName="aodh-db-sync" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.709194 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3bbc635-af3b-445b-9e20-e690038a4e6b" containerName="dnsmasq-dns" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.709224 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="59bdaa5a-26c0-4a26-af8d-b6a4306f904c" containerName="aodh-db-sync" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.709236 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33365ad-887a-4c00-83a7-419a7f002d92" containerName="nova-manage" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.711740 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.714124 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zvnk7" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.715193 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.715523 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.745695 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.871082 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-config-data\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " 
pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.871162 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.871235 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7m4\" (UniqueName: \"kubernetes.io/projected/88a128d2-d2b3-4e01-8384-b4263e97ee51-kube-api-access-jk7m4\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.871285 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-scripts\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.973146 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-config-data\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.973554 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.974523 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7m4\" (UniqueName: 
\"kubernetes.io/projected/88a128d2-d2b3-4e01-8384-b4263e97ee51-kube-api-access-jk7m4\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.974735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-scripts\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.982095 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-config-data\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.983349 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.988277 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-scripts\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:34 crc kubenswrapper[4729]: I0127 14:36:34.996710 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7m4\" (UniqueName: \"kubernetes.io/projected/88a128d2-d2b3-4e01-8384-b4263e97ee51-kube-api-access-jk7m4\") pod \"aodh-0\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " pod="openstack/aodh-0" Jan 27 14:36:35 crc kubenswrapper[4729]: I0127 14:36:35.034062 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 14:36:35 crc kubenswrapper[4729]: I0127 14:36:35.676723 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 14:36:36 crc kubenswrapper[4729]: I0127 14:36:36.085364 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerStarted","Data":"b949013be380241c91eb5799650e719d736a26751fddea350a8a7467033e1cb9"} Jan 27 14:36:40 crc kubenswrapper[4729]: I0127 14:36:40.142515 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerStarted","Data":"df74511b53b53b407af2cb969fc0aa4f5a33635f629af82acc321a8618df665f"} Jan 27 14:36:40 crc kubenswrapper[4729]: I0127 14:36:40.907004 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:40 crc kubenswrapper[4729]: I0127 14:36:40.907304 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-central-agent" containerID="cri-o://2edd9c3e2692bc28207b7cc1145009a237a8ebca40e0e4cba2e0a8ecb75b8b9c" gracePeriod=30 Jan 27 14:36:40 crc kubenswrapper[4729]: I0127 14:36:40.907390 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-notification-agent" containerID="cri-o://a6a7dfcaa9a49c43f4e8f21affc94c14890bfad714d8b59425ae26896c649510" gracePeriod=30 Jan 27 14:36:40 crc kubenswrapper[4729]: I0127 14:36:40.907383 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="proxy-httpd" containerID="cri-o://0afc1beb2808eb45248b268d780eba85d1634996c6da5a94b8cecf2d054695dc" gracePeriod=30 Jan 27 14:36:40 
crc kubenswrapper[4729]: I0127 14:36:40.907534 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="sg-core" containerID="cri-o://8fa4f118ca4112cc333bea6d98e102dcc11b01b2564e3e31f690c9da0ef9e3ad" gracePeriod=30 Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.156803 4729 generic.go:334] "Generic (PLEG): container finished" podID="fac4d439-3f94-4fc7-bd2f-3b39c25a5987" containerID="506100e1d54c9723e7b2132698324a6df8d2f923f8ac18d84bd20c225e57c48d" exitCode=0 Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.157149 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" event={"ID":"fac4d439-3f94-4fc7-bd2f-3b39c25a5987","Type":"ContainerDied","Data":"506100e1d54c9723e7b2132698324a6df8d2f923f8ac18d84bd20c225e57c48d"} Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.160824 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerID="0afc1beb2808eb45248b268d780eba85d1634996c6da5a94b8cecf2d054695dc" exitCode=0 Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.160888 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerID="8fa4f118ca4112cc333bea6d98e102dcc11b01b2564e3e31f690c9da0ef9e3ad" exitCode=2 Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.160907 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerDied","Data":"0afc1beb2808eb45248b268d780eba85d1634996c6da5a94b8cecf2d054695dc"} Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.160951 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerDied","Data":"8fa4f118ca4112cc333bea6d98e102dcc11b01b2564e3e31f690c9da0ef9e3ad"} Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.585025 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.870631 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:36:41 crc kubenswrapper[4729]: I0127 14:36:41.870684 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:36:42 crc kubenswrapper[4729]: I0127 14:36:42.216559 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerID="2edd9c3e2692bc28207b7cc1145009a237a8ebca40e0e4cba2e0a8ecb75b8b9c" exitCode=0 Jan 27 14:36:42 crc kubenswrapper[4729]: I0127 14:36:42.217041 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerDied","Data":"2edd9c3e2692bc28207b7cc1145009a237a8ebca40e0e4cba2e0a8ecb75b8b9c"} Jan 27 14:36:42 crc kubenswrapper[4729]: I0127 14:36:42.944317 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.135199 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwl2z\" (UniqueName: \"kubernetes.io/projected/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-kube-api-access-zwl2z\") pod \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.135248 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-scripts\") pod \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.135266 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-config-data\") pod \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.135395 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-combined-ca-bundle\") pod \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\" (UID: \"fac4d439-3f94-4fc7-bd2f-3b39c25a5987\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.144147 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-kube-api-access-zwl2z" (OuterVolumeSpecName: "kube-api-access-zwl2z") pod "fac4d439-3f94-4fc7-bd2f-3b39c25a5987" (UID: "fac4d439-3f94-4fc7-bd2f-3b39c25a5987"). InnerVolumeSpecName "kube-api-access-zwl2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.210616 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-scripts" (OuterVolumeSpecName: "scripts") pod "fac4d439-3f94-4fc7-bd2f-3b39c25a5987" (UID: "fac4d439-3f94-4fc7-bd2f-3b39c25a5987"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.240653 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwl2z\" (UniqueName: \"kubernetes.io/projected/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-kube-api-access-zwl2z\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.240686 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.251570 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fac4d439-3f94-4fc7-bd2f-3b39c25a5987" (UID: "fac4d439-3f94-4fc7-bd2f-3b39c25a5987"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.319348 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" event={"ID":"fac4d439-3f94-4fc7-bd2f-3b39c25a5987","Type":"ContainerDied","Data":"c6037efdf77e4e3c2cfc421b7fb5035732166132f778d891571fc5bc8a7f6615"} Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.319551 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6037efdf77e4e3c2cfc421b7fb5035732166132f778d891571fc5bc8a7f6615" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.320917 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bkmjh" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.336505 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-config-data" (OuterVolumeSpecName: "config-data") pod "fac4d439-3f94-4fc7-bd2f-3b39c25a5987" (UID: "fac4d439-3f94-4fc7-bd2f-3b39c25a5987"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.342636 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.342674 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac4d439-3f94-4fc7-bd2f-3b39c25a5987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.358242 4729 generic.go:334] "Generic (PLEG): container finished" podID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerID="a6a7dfcaa9a49c43f4e8f21affc94c14890bfad714d8b59425ae26896c649510" exitCode=0 Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.358285 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerDied","Data":"a6a7dfcaa9a49c43f4e8f21affc94c14890bfad714d8b59425ae26896c649510"} Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.365512 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:36:43 crc kubenswrapper[4729]: E0127 14:36:43.366246 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac4d439-3f94-4fc7-bd2f-3b39c25a5987" containerName="nova-cell1-conductor-db-sync" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.366265 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac4d439-3f94-4fc7-bd2f-3b39c25a5987" containerName="nova-cell1-conductor-db-sync" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.366521 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac4d439-3f94-4fc7-bd2f-3b39c25a5987" containerName="nova-cell1-conductor-db-sync" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.367521 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.401966 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.445944 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.446103 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqtjs\" (UniqueName: \"kubernetes.io/projected/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-kube-api-access-bqtjs\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.446283 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.503738 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.547855 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.547941 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqtjs\" (UniqueName: \"kubernetes.io/projected/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-kube-api-access-bqtjs\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.548024 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.554360 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.565756 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.567686 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqtjs\" (UniqueName: \"kubernetes.io/projected/e8c150bd-4541-46f0-8c70-1e5482e6b3f3-kube-api-access-bqtjs\") pod \"nova-cell1-conductor-0\" (UID: \"e8c150bd-4541-46f0-8c70-1e5482e6b3f3\") " pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.650321 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-run-httpd\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.650957 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.651320 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.651544 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-log-httpd\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.652096 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-combined-ca-bundle\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.652270 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75q6s\" (UniqueName: \"kubernetes.io/projected/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-kube-api-access-75q6s\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.652453 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-config-data\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.653199 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-sg-core-conf-yaml\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.653575 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-scripts\") pod \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\" (UID: \"3b06e113-f17c-4f2f-8b88-d7aa48cd5305\") " Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.656758 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.656784 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.674787 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-scripts" (OuterVolumeSpecName: "scripts") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.674934 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-kube-api-access-75q6s" (OuterVolumeSpecName: "kube-api-access-75q6s") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "kube-api-access-75q6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.701025 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.759535 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75q6s\" (UniqueName: \"kubernetes.io/projected/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-kube-api-access-75q6s\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.759611 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.759625 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.767172 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.773028 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.803697 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-config-data" (OuterVolumeSpecName: "config-data") pod "3b06e113-f17c-4f2f-8b88-d7aa48cd5305" (UID: "3b06e113-f17c-4f2f-8b88-d7aa48cd5305"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.862297 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:43 crc kubenswrapper[4729]: I0127 14:36:43.862631 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b06e113-f17c-4f2f-8b88-d7aa48cd5305-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.153965 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.154696 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" containerName="kube-state-metrics" containerID="cri-o://5ac0e25c32c0ec47b5fe249423023e939778c9859012f75e6aa2e868488277b1" gracePeriod=30 Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.237301 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.240629 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="45271248-b433-4451-8ee0-30c55a37e285" containerName="mysqld-exporter" containerID="cri-o://f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f" gracePeriod=30 Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.332679 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: W0127 14:36:44.364051 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8c150bd_4541_46f0_8c70_1e5482e6b3f3.slice/crio-5ea2564f94f2a9becff7223903847285f0dbfc5f49c02999345979d31d5701e4 WatchSource:0}: Error finding container 5ea2564f94f2a9becff7223903847285f0dbfc5f49c02999345979d31d5701e4: Status 404 returned error can't find the container with id 5ea2564f94f2a9becff7223903847285f0dbfc5f49c02999345979d31d5701e4 Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.386221 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerStarted","Data":"0892bdf4ee459835dc4989233aabbbf43cf16f9e40c7040f2d59bd9df7ab4bdf"} Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.396837 4729 generic.go:334] "Generic (PLEG): container finished" podID="69bd8e93-2421-411f-ad18-0a92631e3345" containerID="5ac0e25c32c0ec47b5fe249423023e939778c9859012f75e6aa2e868488277b1" exitCode=2 Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.396924 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"69bd8e93-2421-411f-ad18-0a92631e3345","Type":"ContainerDied","Data":"5ac0e25c32c0ec47b5fe249423023e939778c9859012f75e6aa2e868488277b1"} Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.403308 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b06e113-f17c-4f2f-8b88-d7aa48cd5305","Type":"ContainerDied","Data":"4564052fdcc1c6bf29c2ac426712f611104f01ebd95bdcef51852d09f5f5166a"} Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.403356 4729 scope.go:117] "RemoveContainer" containerID="0afc1beb2808eb45248b268d780eba85d1634996c6da5a94b8cecf2d054695dc" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.403405 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.607249 4729 scope.go:117] "RemoveContainer" containerID="8fa4f118ca4112cc333bea6d98e102dcc11b01b2564e3e31f690c9da0ef9e3ad" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.618757 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.659057 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.680970 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: E0127 14:36:44.681569 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-notification-agent" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681586 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-notification-agent" Jan 27 14:36:44 crc kubenswrapper[4729]: E0127 14:36:44.681616 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="proxy-httpd" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681626 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="proxy-httpd" Jan 27 14:36:44 crc kubenswrapper[4729]: E0127 14:36:44.681639 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="sg-core" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681647 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="sg-core" Jan 27 14:36:44 crc kubenswrapper[4729]: E0127 14:36:44.681677 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-central-agent" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681684 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-central-agent" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681916 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-central-agent" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681933 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="proxy-httpd" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681955 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="ceilometer-notification-agent" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.681973 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" containerName="sg-core" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.684176 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.701684 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.701896 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.703449 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.739395 4729 scope.go:117] "RemoveContainer" containerID="a6a7dfcaa9a49c43f4e8f21affc94c14890bfad714d8b59425ae26896c649510" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.801722 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-run-httpd\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.801847 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.801998 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-log-httpd\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.802102 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.802185 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-scripts\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.802238 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swlmm\" (UniqueName: \"kubernetes.io/projected/80896150-cf23-4535-b110-fc0079b0e2f0-kube-api-access-swlmm\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.802404 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-config-data\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.830077 4729 scope.go:117] "RemoveContainer" containerID="2edd9c3e2692bc28207b7cc1145009a237a8ebca40e0e4cba2e0a8ecb75b8b9c" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924211 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924330 4729 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-log-httpd\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924393 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924438 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-scripts\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924458 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swlmm\" (UniqueName: \"kubernetes.io/projected/80896150-cf23-4535-b110-fc0079b0e2f0-kube-api-access-swlmm\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924558 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-config-data\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.924818 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-run-httpd\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 
14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.936074 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-run-httpd\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.936699 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-log-httpd\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.941809 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.943145 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-config-data\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.945140 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.946706 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-scripts\") pod \"ceilometer-0\" (UID: 
\"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:44 crc kubenswrapper[4729]: I0127 14:36:44.983013 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swlmm\" (UniqueName: \"kubernetes.io/projected/80896150-cf23-4535-b110-fc0079b0e2f0-kube-api-access-swlmm\") pod \"ceilometer-0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " pod="openstack/ceilometer-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.045659 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.236860 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.246345 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.345940 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-config-data\") pod \"45271248-b433-4451-8ee0-30c55a37e285\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.346135 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rvjh\" (UniqueName: \"kubernetes.io/projected/69bd8e93-2421-411f-ad18-0a92631e3345-kube-api-access-9rvjh\") pod \"69bd8e93-2421-411f-ad18-0a92631e3345\" (UID: \"69bd8e93-2421-411f-ad18-0a92631e3345\") " Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.346249 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69rng\" (UniqueName: \"kubernetes.io/projected/45271248-b433-4451-8ee0-30c55a37e285-kube-api-access-69rng\") pod \"45271248-b433-4451-8ee0-30c55a37e285\" 
(UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.346347 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-combined-ca-bundle\") pod \"45271248-b433-4451-8ee0-30c55a37e285\" (UID: \"45271248-b433-4451-8ee0-30c55a37e285\") " Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.357503 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bd8e93-2421-411f-ad18-0a92631e3345-kube-api-access-9rvjh" (OuterVolumeSpecName: "kube-api-access-9rvjh") pod "69bd8e93-2421-411f-ad18-0a92631e3345" (UID: "69bd8e93-2421-411f-ad18-0a92631e3345"). InnerVolumeSpecName "kube-api-access-9rvjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.365412 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45271248-b433-4451-8ee0-30c55a37e285-kube-api-access-69rng" (OuterVolumeSpecName: "kube-api-access-69rng") pod "45271248-b433-4451-8ee0-30c55a37e285" (UID: "45271248-b433-4451-8ee0-30c55a37e285"). InnerVolumeSpecName "kube-api-access-69rng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.411539 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45271248-b433-4451-8ee0-30c55a37e285" (UID: "45271248-b433-4451-8ee0-30c55a37e285"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.451087 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rvjh\" (UniqueName: \"kubernetes.io/projected/69bd8e93-2421-411f-ad18-0a92631e3345-kube-api-access-9rvjh\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.451147 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69rng\" (UniqueName: \"kubernetes.io/projected/45271248-b433-4451-8ee0-30c55a37e285-kube-api-access-69rng\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.451163 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.455234 4729 generic.go:334] "Generic (PLEG): container finished" podID="45271248-b433-4451-8ee0-30c55a37e285" containerID="f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f" exitCode=2 Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.455503 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"45271248-b433-4451-8ee0-30c55a37e285","Type":"ContainerDied","Data":"f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f"} Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.455557 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"45271248-b433-4451-8ee0-30c55a37e285","Type":"ContainerDied","Data":"a39ccbb18cf3effd188b3e195776b1e5ec3a7c05963d19294c2ce40e564593ed"} Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.455579 4729 scope.go:117] "RemoveContainer" containerID="f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.455601 4729 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.475179 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e8c150bd-4541-46f0-8c70-1e5482e6b3f3","Type":"ContainerStarted","Data":"c0f5aa74418bf9166291579756edcba7d3534c6a938afb16465e02236f464bd1"} Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.475711 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e8c150bd-4541-46f0-8c70-1e5482e6b3f3","Type":"ContainerStarted","Data":"5ea2564f94f2a9becff7223903847285f0dbfc5f49c02999345979d31d5701e4"} Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.475995 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.481297 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-config-data" (OuterVolumeSpecName: "config-data") pod "45271248-b433-4451-8ee0-30c55a37e285" (UID: "45271248-b433-4451-8ee0-30c55a37e285"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.491566 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"69bd8e93-2421-411f-ad18-0a92631e3345","Type":"ContainerDied","Data":"ef0db48f6e2af206d1598a828ed69320724b8f9c309b30bc5213f6ace67043ac"} Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.491666 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.517611 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.517589774 podStartE2EDuration="2.517589774s" podCreationTimestamp="2026-01-27 14:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:36:45.507398671 +0000 UTC m=+1892.091589685" watchObservedRunningTime="2026-01-27 14:36:45.517589774 +0000 UTC m=+1892.101780778" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.545350 4729 scope.go:117] "RemoveContainer" containerID="f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f" Jan 27 14:36:45 crc kubenswrapper[4729]: E0127 14:36:45.548801 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f\": container with ID starting with f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f not found: ID does not exist" containerID="f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.548856 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f"} err="failed to get container status \"f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f\": rpc error: code = NotFound desc = could not find container \"f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f\": container with ID starting with f0cbf59553e0eccad49bc0cfd217e6b9094ab26b232c1f7935756535001d659f not found: ID does not exist" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.548980 4729 scope.go:117] "RemoveContainer" 
containerID="5ac0e25c32c0ec47b5fe249423023e939778c9859012f75e6aa2e868488277b1" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.560454 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45271248-b433-4451-8ee0-30c55a37e285-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.586522 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.614459 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.628358 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:36:45 crc kubenswrapper[4729]: E0127 14:36:45.629056 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45271248-b433-4451-8ee0-30c55a37e285" containerName="mysqld-exporter" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.629079 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="45271248-b433-4451-8ee0-30c55a37e285" containerName="mysqld-exporter" Jan 27 14:36:45 crc kubenswrapper[4729]: E0127 14:36:45.629151 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" containerName="kube-state-metrics" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.629160 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" containerName="kube-state-metrics" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.629443 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="45271248-b433-4451-8ee0-30c55a37e285" containerName="mysqld-exporter" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.629474 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" 
containerName="kube-state-metrics" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.630507 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.637088 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.638077 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.648799 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.765657 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77dwj\" (UniqueName: \"kubernetes.io/projected/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-api-access-77dwj\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.765869 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.766017 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.766115 
4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.869990 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.870087 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.870147 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.870526 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77dwj\" (UniqueName: \"kubernetes.io/projected/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-api-access-77dwj\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.877588 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.878493 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.888714 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51fc79dc-e632-414e-a354-54c3bfd2eb8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.914731 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77dwj\" (UniqueName: \"kubernetes.io/projected/51fc79dc-e632-414e-a354-54c3bfd2eb8d-kube-api-access-77dwj\") pod \"kube-state-metrics-0\" (UID: \"51fc79dc-e632-414e-a354-54c3bfd2eb8d\") " pod="openstack/kube-state-metrics-0" Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.941449 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:45 crc kubenswrapper[4729]: I0127 14:36:45.961035 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.069684 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b06e113-f17c-4f2f-8b88-d7aa48cd5305" path="/var/lib/kubelet/pods/3b06e113-f17c-4f2f-8b88-d7aa48cd5305/volumes" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.093925 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" path="/var/lib/kubelet/pods/69bd8e93-2421-411f-ad18-0a92631e3345/volumes" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.141371 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.194542 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.228374 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.248038 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.252578 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.252812 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.296320 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.296621 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2wd\" (UniqueName: \"kubernetes.io/projected/3c869358-ae88-4f4a-9317-4e1176fdb199-kube-api-access-qv2wd\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.296658 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-config-data\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.296814 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc 
kubenswrapper[4729]: I0127 14:36:46.302285 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.399302 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv2wd\" (UniqueName: \"kubernetes.io/projected/3c869358-ae88-4f4a-9317-4e1176fdb199-kube-api-access-qv2wd\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.399375 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-config-data\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.399414 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.399501 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.413763 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 
27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.417163 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.418555 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c869358-ae88-4f4a-9317-4e1176fdb199-config-data\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.429139 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv2wd\" (UniqueName: \"kubernetes.io/projected/3c869358-ae88-4f4a-9317-4e1176fdb199-kube-api-access-qv2wd\") pod \"mysqld-exporter-0\" (UID: \"3c869358-ae88-4f4a-9317-4e1176fdb199\") " pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.508634 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerStarted","Data":"75eb86fcb4f777b148e3d5f700f83e29f9deb31433ac6ee15375d15bb91de988"} Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.608510 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 27 14:36:46 crc kubenswrapper[4729]: I0127 14:36:46.705290 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 27 14:36:47 crc kubenswrapper[4729]: I0127 14:36:47.190323 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:36:48 crc kubenswrapper[4729]: I0127 14:36:48.067162 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45271248-b433-4451-8ee0-30c55a37e285" path="/var/lib/kubelet/pods/45271248-b433-4451-8ee0-30c55a37e285/volumes" Jan 27 14:36:48 crc kubenswrapper[4729]: I0127 14:36:48.406601 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 27 14:36:48 crc kubenswrapper[4729]: I0127 14:36:48.536029 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"51fc79dc-e632-414e-a354-54c3bfd2eb8d","Type":"ContainerStarted","Data":"c3b59385488ee1ea339435e8bc08b6fed84740ec9e65446dd7bf5826747f02ce"} Jan 27 14:36:48 crc kubenswrapper[4729]: I0127 14:36:48.537541 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3c869358-ae88-4f4a-9317-4e1176fdb199","Type":"ContainerStarted","Data":"78f177a94d8f596355b675ce3d913508fbec6448b37caa7b79b3f526d3bbc86d"} Jan 27 14:36:49 crc kubenswrapper[4729]: I0127 14:36:49.662220 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="69bd8e93-2421-411f-ad18-0a92631e3345" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.132:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:36:50 crc kubenswrapper[4729]: I0127 14:36:50.082166 4729 scope.go:117] "RemoveContainer" containerID="93c4ecbb2fc5e9a98d8f033d7f23ffbb72e4e49c436ae8ec109fca8db0b8e1b0" Jan 27 14:36:50 crc kubenswrapper[4729]: I0127 
14:36:50.219055 4729 scope.go:117] "RemoveContainer" containerID="127aa1201f17a442cd9fe2ba21a21c49c6b00f07d05e2f805ca052344b128930" Jan 27 14:36:50 crc kubenswrapper[4729]: I0127 14:36:50.561856 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerStarted","Data":"ec1936de8d2890116939050bd0b5cf30f16f291d40a0779bc7ceb6c085a3222e"} Jan 27 14:36:50 crc kubenswrapper[4729]: I0127 14:36:50.565158 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerStarted","Data":"7d3bcbe986df1138fb5383c3db31b8885d829c3f96c2e4df47a35b72f7a23fd4"} Jan 27 14:36:51 crc kubenswrapper[4729]: I0127 14:36:51.643152 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"51fc79dc-e632-414e-a354-54c3bfd2eb8d","Type":"ContainerStarted","Data":"a724132cf57be657c662e654b0eee34f61c373cb62180d5a83ef8a28d3046b8a"} Jan 27 14:36:51 crc kubenswrapper[4729]: I0127 14:36:51.645978 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 27 14:36:51 crc kubenswrapper[4729]: I0127 14:36:51.673136 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"3c869358-ae88-4f4a-9317-4e1176fdb199","Type":"ContainerStarted","Data":"da7e2274faef510c82c5ddbafba8adf0927d795e3fbcb3fd74005938ed778f3c"} Jan 27 14:36:51 crc kubenswrapper[4729]: I0127 14:36:51.680283 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.113960695 podStartE2EDuration="6.680263656s" podCreationTimestamp="2026-01-27 14:36:45 +0000 UTC" firstStartedPulling="2026-01-27 14:36:47.784292194 +0000 UTC m=+1894.368483198" lastFinishedPulling="2026-01-27 14:36:50.350595155 +0000 UTC m=+1896.934786159" observedRunningTime="2026-01-27 
14:36:51.674252142 +0000 UTC m=+1898.258443166" watchObservedRunningTime="2026-01-27 14:36:51.680263656 +0000 UTC m=+1898.264454660" Jan 27 14:36:51 crc kubenswrapper[4729]: I0127 14:36:51.705998 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.896419815 podStartE2EDuration="5.705970447s" podCreationTimestamp="2026-01-27 14:36:46 +0000 UTC" firstStartedPulling="2026-01-27 14:36:48.409491342 +0000 UTC m=+1894.993682346" lastFinishedPulling="2026-01-27 14:36:50.219041974 +0000 UTC m=+1896.803232978" observedRunningTime="2026-01-27 14:36:51.702949289 +0000 UTC m=+1898.287140313" watchObservedRunningTime="2026-01-27 14:36:51.705970447 +0000 UTC m=+1898.290161451" Jan 27 14:36:52 crc kubenswrapper[4729]: I0127 14:36:52.688040 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerStarted","Data":"bfebea4c55188d95cb23cce50991d7654823280f297b3edcfe85e0615b76f836"} Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.700689 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerStarted","Data":"6d929759630f38caad87b6f9924d2654ec5df357e09ddd16f832616fa77b993b"} Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.705086 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-api" containerID="cri-o://df74511b53b53b407af2cb969fc0aa4f5a33635f629af82acc321a8618df665f" gracePeriod=30 Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.705605 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-listener" 
containerID="cri-o://aec31dcba4c301eceb4d77c4b3bad68f42cb072a92a8d1327736b8579ac4d9aa" gracePeriod=30 Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.705678 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-notifier" containerID="cri-o://7d3bcbe986df1138fb5383c3db31b8885d829c3f96c2e4df47a35b72f7a23fd4" gracePeriod=30 Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.705705 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerStarted","Data":"aec31dcba4c301eceb4d77c4b3bad68f42cb072a92a8d1327736b8579ac4d9aa"} Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.705726 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-evaluator" containerID="cri-o://0892bdf4ee459835dc4989233aabbbf43cf16f9e40c7040f2d59bd9df7ab4bdf" gracePeriod=30 Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.732825 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.552404149 podStartE2EDuration="19.732797344s" podCreationTimestamp="2026-01-27 14:36:34 +0000 UTC" firstStartedPulling="2026-01-27 14:36:35.686758333 +0000 UTC m=+1882.270949337" lastFinishedPulling="2026-01-27 14:36:52.867151528 +0000 UTC m=+1899.451342532" observedRunningTime="2026-01-27 14:36:53.731046549 +0000 UTC m=+1900.315237563" watchObservedRunningTime="2026-01-27 14:36:53.732797344 +0000 UTC m=+1900.316988358" Jan 27 14:36:53 crc kubenswrapper[4729]: I0127 14:36:53.830979 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 27 14:36:54 crc kubenswrapper[4729]: I0127 14:36:54.721266 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerID="7d3bcbe986df1138fb5383c3db31b8885d829c3f96c2e4df47a35b72f7a23fd4" exitCode=0 Jan 27 14:36:54 crc kubenswrapper[4729]: I0127 14:36:54.721808 4729 generic.go:334] "Generic (PLEG): container finished" podID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerID="0892bdf4ee459835dc4989233aabbbf43cf16f9e40c7040f2d59bd9df7ab4bdf" exitCode=0 Jan 27 14:36:54 crc kubenswrapper[4729]: I0127 14:36:54.722154 4729 generic.go:334] "Generic (PLEG): container finished" podID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerID="df74511b53b53b407af2cb969fc0aa4f5a33635f629af82acc321a8618df665f" exitCode=0 Jan 27 14:36:54 crc kubenswrapper[4729]: I0127 14:36:54.721466 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerDied","Data":"7d3bcbe986df1138fb5383c3db31b8885d829c3f96c2e4df47a35b72f7a23fd4"} Jan 27 14:36:54 crc kubenswrapper[4729]: I0127 14:36:54.722335 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerDied","Data":"0892bdf4ee459835dc4989233aabbbf43cf16f9e40c7040f2d59bd9df7ab4bdf"} Jan 27 14:36:54 crc kubenswrapper[4729]: I0127 14:36:54.722411 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerDied","Data":"df74511b53b53b407af2cb969fc0aa4f5a33635f629af82acc321a8618df665f"} Jan 27 14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.016608 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 27 14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.749145 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerStarted","Data":"2c5df2d52d90a37480ee9ece26ca3b4cb30b4732d3906c7bf813671f002d8d12"} Jan 27 
14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.749607 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-central-agent" containerID="cri-o://ec1936de8d2890116939050bd0b5cf30f16f291d40a0779bc7ceb6c085a3222e" gracePeriod=30 Jan 27 14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.749953 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.750369 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="proxy-httpd" containerID="cri-o://2c5df2d52d90a37480ee9ece26ca3b4cb30b4732d3906c7bf813671f002d8d12" gracePeriod=30 Jan 27 14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.750433 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="sg-core" containerID="cri-o://6d929759630f38caad87b6f9924d2654ec5df357e09ddd16f832616fa77b993b" gracePeriod=30 Jan 27 14:36:56 crc kubenswrapper[4729]: I0127 14:36:56.750486 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-notification-agent" containerID="cri-o://bfebea4c55188d95cb23cce50991d7654823280f297b3edcfe85e0615b76f836" gracePeriod=30 Jan 27 14:36:57 crc kubenswrapper[4729]: I0127 14:36:57.772908 4729 generic.go:334] "Generic (PLEG): container finished" podID="80896150-cf23-4535-b110-fc0079b0e2f0" containerID="2c5df2d52d90a37480ee9ece26ca3b4cb30b4732d3906c7bf813671f002d8d12" exitCode=0 Jan 27 14:36:57 crc kubenswrapper[4729]: I0127 14:36:57.773422 4729 generic.go:334] "Generic (PLEG): container finished" podID="80896150-cf23-4535-b110-fc0079b0e2f0" 
containerID="6d929759630f38caad87b6f9924d2654ec5df357e09ddd16f832616fa77b993b" exitCode=2 Jan 27 14:36:57 crc kubenswrapper[4729]: I0127 14:36:57.772966 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerDied","Data":"2c5df2d52d90a37480ee9ece26ca3b4cb30b4732d3906c7bf813671f002d8d12"} Jan 27 14:36:57 crc kubenswrapper[4729]: I0127 14:36:57.773468 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerDied","Data":"6d929759630f38caad87b6f9924d2654ec5df357e09ddd16f832616fa77b993b"} Jan 27 14:36:57 crc kubenswrapper[4729]: I0127 14:36:57.773485 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerDied","Data":"bfebea4c55188d95cb23cce50991d7654823280f297b3edcfe85e0615b76f836"} Jan 27 14:36:57 crc kubenswrapper[4729]: I0127 14:36:57.773437 4729 generic.go:334] "Generic (PLEG): container finished" podID="80896150-cf23-4535-b110-fc0079b0e2f0" containerID="bfebea4c55188d95cb23cce50991d7654823280f297b3edcfe85e0615b76f836" exitCode=0 Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.803969 4729 generic.go:334] "Generic (PLEG): container finished" podID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerID="363ff0ad06a2d6c7b2eb159b87d41bba96f7cf7c355c33bc23d9a38addeab9cc" exitCode=137 Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.804178 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68e3ce57-7767-4d97-909d-b5f2f3c402e2","Type":"ContainerDied","Data":"363ff0ad06a2d6c7b2eb159b87d41bba96f7cf7c355c33bc23d9a38addeab9cc"} Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.807121 4729 generic.go:334] "Generic (PLEG): container finished" podID="112efdd5-3270-4415-805b-898634eebeb6" 
containerID="5f5bbaf6967d1fd8d17bbcb7b9cfb3de39f2490d5ee0f165c1570f0b45b18c69" exitCode=137 Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.807189 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"112efdd5-3270-4415-805b-898634eebeb6","Type":"ContainerDied","Data":"5f5bbaf6967d1fd8d17bbcb7b9cfb3de39f2490d5ee0f165c1570f0b45b18c69"} Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.809598 4729 generic.go:334] "Generic (PLEG): container finished" podID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerID="c320f5e4171bfde4800e677a059ad36618d63f24632bff89159fe70b19697372" exitCode=137 Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.809657 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93e7d1df-5e4c-4683-b45a-351aabeeb3bb","Type":"ContainerDied","Data":"c320f5e4171bfde4800e677a059ad36618d63f24632bff89159fe70b19697372"} Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.812050 4729 generic.go:334] "Generic (PLEG): container finished" podID="2c242cff-8f90-41ee-8c60-62710709cad9" containerID="eeeae124e04392dab420089513dac9100188aa2eb024397f800be106d9e6e70e" exitCode=137 Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.812120 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c242cff-8f90-41ee-8c60-62710709cad9","Type":"ContainerDied","Data":"eeeae124e04392dab420089513dac9100188aa2eb024397f800be106d9e6e70e"} Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.815431 4729 generic.go:334] "Generic (PLEG): container finished" podID="80896150-cf23-4535-b110-fc0079b0e2f0" containerID="ec1936de8d2890116939050bd0b5cf30f16f291d40a0779bc7ceb6c085a3222e" exitCode=0 Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.815474 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerDied","Data":"ec1936de8d2890116939050bd0b5cf30f16f291d40a0779bc7ceb6c085a3222e"} Jan 27 14:36:59 crc kubenswrapper[4729]: I0127 14:36:59.985967 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.168634 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-scripts\") pod \"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.168911 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-run-httpd\") pod \"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.168976 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-log-httpd\") pod \"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.169043 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-combined-ca-bundle\") pod \"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.169107 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swlmm\" (UniqueName: \"kubernetes.io/projected/80896150-cf23-4535-b110-fc0079b0e2f0-kube-api-access-swlmm\") pod 
\"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.169327 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-sg-core-conf-yaml\") pod \"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.169369 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-config-data\") pod \"80896150-cf23-4535-b110-fc0079b0e2f0\" (UID: \"80896150-cf23-4535-b110-fc0079b0e2f0\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.169578 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.170654 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.172058 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.172080 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80896150-cf23-4535-b110-fc0079b0e2f0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.179164 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80896150-cf23-4535-b110-fc0079b0e2f0-kube-api-access-swlmm" (OuterVolumeSpecName: "kube-api-access-swlmm") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "kube-api-access-swlmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.197405 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-scripts" (OuterVolumeSpecName: "scripts") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.261631 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.274818 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.274846 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.274855 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swlmm\" (UniqueName: \"kubernetes.io/projected/80896150-cf23-4535-b110-fc0079b0e2f0-kube-api-access-swlmm\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.334777 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.377384 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.433334 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-config-data" (OuterVolumeSpecName: "config-data") pod "80896150-cf23-4535-b110-fc0079b0e2f0" (UID: "80896150-cf23-4535-b110-fc0079b0e2f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.483868 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80896150-cf23-4535-b110-fc0079b0e2f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.573210 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.580989 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.592953 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.636772 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692108 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-combined-ca-bundle\") pod \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692326 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e3ce57-7767-4d97-909d-b5f2f3c402e2-logs\") pod \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692495 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-config-data\") pod 
\"68e3ce57-7767-4d97-909d-b5f2f3c402e2\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692523 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn7vb\" (UniqueName: \"kubernetes.io/projected/68e3ce57-7767-4d97-909d-b5f2f3c402e2-kube-api-access-pn7vb\") pod \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\" (UID: \"68e3ce57-7767-4d97-909d-b5f2f3c402e2\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692596 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-config-data\") pod \"2c242cff-8f90-41ee-8c60-62710709cad9\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692634 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-config-data\") pod \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692690 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-combined-ca-bundle\") pod \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.692997 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-logs\") pod \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.693065 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sttck\" (UniqueName: \"kubernetes.io/projected/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-kube-api-access-sttck\") pod \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\" (UID: \"93e7d1df-5e4c-4683-b45a-351aabeeb3bb\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.693095 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-combined-ca-bundle\") pod \"2c242cff-8f90-41ee-8c60-62710709cad9\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.693121 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msxlx\" (UniqueName: \"kubernetes.io/projected/2c242cff-8f90-41ee-8c60-62710709cad9-kube-api-access-msxlx\") pod \"2c242cff-8f90-41ee-8c60-62710709cad9\" (UID: \"2c242cff-8f90-41ee-8c60-62710709cad9\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.698652 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-logs" (OuterVolumeSpecName: "logs") pod "93e7d1df-5e4c-4683-b45a-351aabeeb3bb" (UID: "93e7d1df-5e4c-4683-b45a-351aabeeb3bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.699369 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68e3ce57-7767-4d97-909d-b5f2f3c402e2-logs" (OuterVolumeSpecName: "logs") pod "68e3ce57-7767-4d97-909d-b5f2f3c402e2" (UID: "68e3ce57-7767-4d97-909d-b5f2f3c402e2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.701999 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c242cff-8f90-41ee-8c60-62710709cad9-kube-api-access-msxlx" (OuterVolumeSpecName: "kube-api-access-msxlx") pod "2c242cff-8f90-41ee-8c60-62710709cad9" (UID: "2c242cff-8f90-41ee-8c60-62710709cad9"). InnerVolumeSpecName "kube-api-access-msxlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.702853 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-kube-api-access-sttck" (OuterVolumeSpecName: "kube-api-access-sttck") pod "93e7d1df-5e4c-4683-b45a-351aabeeb3bb" (UID: "93e7d1df-5e4c-4683-b45a-351aabeeb3bb"). InnerVolumeSpecName "kube-api-access-sttck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.708646 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e3ce57-7767-4d97-909d-b5f2f3c402e2-kube-api-access-pn7vb" (OuterVolumeSpecName: "kube-api-access-pn7vb") pod "68e3ce57-7767-4d97-909d-b5f2f3c402e2" (UID: "68e3ce57-7767-4d97-909d-b5f2f3c402e2"). InnerVolumeSpecName "kube-api-access-pn7vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.725341 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68e3ce57-7767-4d97-909d-b5f2f3c402e2" (UID: "68e3ce57-7767-4d97-909d-b5f2f3c402e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.729253 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-config-data" (OuterVolumeSpecName: "config-data") pod "93e7d1df-5e4c-4683-b45a-351aabeeb3bb" (UID: "93e7d1df-5e4c-4683-b45a-351aabeeb3bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.730507 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93e7d1df-5e4c-4683-b45a-351aabeeb3bb" (UID: "93e7d1df-5e4c-4683-b45a-351aabeeb3bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.732000 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-config-data" (OuterVolumeSpecName: "config-data") pod "2c242cff-8f90-41ee-8c60-62710709cad9" (UID: "2c242cff-8f90-41ee-8c60-62710709cad9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.737564 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c242cff-8f90-41ee-8c60-62710709cad9" (UID: "2c242cff-8f90-41ee-8c60-62710709cad9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.743214 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-config-data" (OuterVolumeSpecName: "config-data") pod "68e3ce57-7767-4d97-909d-b5f2f3c402e2" (UID: "68e3ce57-7767-4d97-909d-b5f2f3c402e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.795804 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-config-data\") pod \"112efdd5-3270-4415-805b-898634eebeb6\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.795870 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-combined-ca-bundle\") pod \"112efdd5-3270-4415-805b-898634eebeb6\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.796295 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hgd5\" (UniqueName: \"kubernetes.io/projected/112efdd5-3270-4415-805b-898634eebeb6-kube-api-access-8hgd5\") pod \"112efdd5-3270-4415-805b-898634eebeb6\" (UID: \"112efdd5-3270-4415-805b-898634eebeb6\") " Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797243 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797266 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sttck\" (UniqueName: 
\"kubernetes.io/projected/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-kube-api-access-sttck\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797278 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797287 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msxlx\" (UniqueName: \"kubernetes.io/projected/2c242cff-8f90-41ee-8c60-62710709cad9-kube-api-access-msxlx\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797295 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797304 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68e3ce57-7767-4d97-909d-b5f2f3c402e2-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797312 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68e3ce57-7767-4d97-909d-b5f2f3c402e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797320 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn7vb\" (UniqueName: \"kubernetes.io/projected/68e3ce57-7767-4d97-909d-b5f2f3c402e2-kube-api-access-pn7vb\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797329 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c242cff-8f90-41ee-8c60-62710709cad9-config-data\") on node \"crc\" DevicePath \"\"" 
Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797336 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.797344 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e7d1df-5e4c-4683-b45a-351aabeeb3bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.800803 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112efdd5-3270-4415-805b-898634eebeb6-kube-api-access-8hgd5" (OuterVolumeSpecName: "kube-api-access-8hgd5") pod "112efdd5-3270-4415-805b-898634eebeb6" (UID: "112efdd5-3270-4415-805b-898634eebeb6"). InnerVolumeSpecName "kube-api-access-8hgd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.829444 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2c242cff-8f90-41ee-8c60-62710709cad9","Type":"ContainerDied","Data":"bd3373cc97d3cbad33ab996436f7e46442493868bd9a5949b654e39d0e423d37"} Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.829522 4729 scope.go:117] "RemoveContainer" containerID="eeeae124e04392dab420089513dac9100188aa2eb024397f800be106d9e6e70e" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.829704 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.833074 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-config-data" (OuterVolumeSpecName: "config-data") pod "112efdd5-3270-4415-805b-898634eebeb6" (UID: "112efdd5-3270-4415-805b-898634eebeb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.835374 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "112efdd5-3270-4415-805b-898634eebeb6" (UID: "112efdd5-3270-4415-805b-898634eebeb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.835889 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.836152 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"80896150-cf23-4535-b110-fc0079b0e2f0","Type":"ContainerDied","Data":"75eb86fcb4f777b148e3d5f700f83e29f9deb31433ac6ee15375d15bb91de988"} Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.840537 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"68e3ce57-7767-4d97-909d-b5f2f3c402e2","Type":"ContainerDied","Data":"55b8a415e85079da8e5dd5083331155a31b08cf90b435377dbb5f331acb886f5"} Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.840588 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.842165 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"112efdd5-3270-4415-805b-898634eebeb6","Type":"ContainerDied","Data":"fa2549c5f25c495a67e3c31a166366273b6a04fc753c65f3f6836fb5469f60ce"} Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.842220 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.844331 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93e7d1df-5e4c-4683-b45a-351aabeeb3bb","Type":"ContainerDied","Data":"4bb175dcb08ca999e084bc009506592882aaa40f6e37680f5846ba4b6676cafa"} Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.844405 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.888703 4729 scope.go:117] "RemoveContainer" containerID="2c5df2d52d90a37480ee9ece26ca3b4cb30b4732d3906c7bf813671f002d8d12" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.899922 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.899955 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112efdd5-3270-4415-805b-898634eebeb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.899966 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hgd5\" (UniqueName: \"kubernetes.io/projected/112efdd5-3270-4415-805b-898634eebeb6-kube-api-access-8hgd5\") on node \"crc\" DevicePath \"\"" 
Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.900658 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.946319 4729 scope.go:117] "RemoveContainer" containerID="6d929759630f38caad87b6f9924d2654ec5df357e09ddd16f832616fa77b993b" Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.946679 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.968894 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:37:00 crc kubenswrapper[4729]: I0127 14:37:00.985955 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005063 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005684 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-metadata" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005700 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-metadata" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005717 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-log" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005723 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-log" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005742 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c242cff-8f90-41ee-8c60-62710709cad9" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:37:01 crc 
kubenswrapper[4729]: I0127 14:37:01.005748 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c242cff-8f90-41ee-8c60-62710709cad9" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005770 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="sg-core" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005776 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="sg-core" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005789 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112efdd5-3270-4415-805b-898634eebeb6" containerName="nova-scheduler-scheduler" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005794 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="112efdd5-3270-4415-805b-898634eebeb6" containerName="nova-scheduler-scheduler" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005807 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-log" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005814 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-log" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005829 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-api" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005834 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-api" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005848 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-central-agent" Jan 27 
14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005854 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-central-agent" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005866 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-notification-agent" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005871 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-notification-agent" Jan 27 14:37:01 crc kubenswrapper[4729]: E0127 14:37:01.005901 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="proxy-httpd" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.005908 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="proxy-httpd" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006105 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="112efdd5-3270-4415-805b-898634eebeb6" containerName="nova-scheduler-scheduler" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006120 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-log" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006135 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-notification-agent" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006151 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-api" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006161 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" containerName="nova-metadata-metadata" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006170 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="proxy-httpd" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006186 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="sg-core" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006200 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c242cff-8f90-41ee-8c60-62710709cad9" containerName="nova-cell1-novncproxy-novncproxy" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006209 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" containerName="ceilometer-central-agent" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.006225 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" containerName="nova-api-log" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.024611 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.030846 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.031768 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.031797 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.065340 4729 scope.go:117] "RemoveContainer" containerID="bfebea4c55188d95cb23cce50991d7654823280f297b3edcfe85e0615b76f836" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.073761 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.092784 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.105645 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.109338 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.112116 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.112368 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.113303 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.114859 4729 scope.go:117] "RemoveContainer" containerID="ec1936de8d2890116939050bd0b5cf30f16f291d40a0779bc7ceb6c085a3222e" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.121298 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.134664 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.134733 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw29d\" (UniqueName: \"kubernetes.io/projected/9201a195-6f0c-4521-a4d9-a31706dbedce-kube-api-access-lw29d\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.135012 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.135141 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.135204 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.139523 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.154032 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.155101 4729 scope.go:117] "RemoveContainer" containerID="363ff0ad06a2d6c7b2eb159b87d41bba96f7cf7c355c33bc23d9a38addeab9cc" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.168741 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.171321 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.175484 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.176456 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.182014 4729 scope.go:117] "RemoveContainer" containerID="2a9b4e0146be64f0e0084035c096cee30272b6947fbd05a1324c8588f8d63dfe" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.186142 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.209708 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.226907 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.237793 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4dbk\" (UniqueName: \"kubernetes.io/projected/b4c3812c-02a2-412f-b893-6b5a70e6a66b-kube-api-access-l4dbk\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.237921 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.237973 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw29d\" (UniqueName: 
\"kubernetes.io/projected/9201a195-6f0c-4521-a4d9-a31706dbedce-kube-api-access-lw29d\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238000 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-log-httpd\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238028 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238057 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238081 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238205 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c3812c-02a2-412f-b893-6b5a70e6a66b-logs\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238258 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238305 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238336 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-scripts\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238362 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-config-data\") pod \"ceilometer-0\" (UID: 
\"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238405 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238446 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238478 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9rx6\" (UniqueName: \"kubernetes.io/projected/bcce1286-b6d3-417f-abf4-7f076bede9a8-kube-api-access-m9rx6\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238518 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-run-httpd\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.238547 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-config-data\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 
14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.242922 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.244071 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.245202 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.245956 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.245981 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9201a195-6f0c-4521-a4d9-a31706dbedce-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.246234 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.249359 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.260949 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw29d\" (UniqueName: \"kubernetes.io/projected/9201a195-6f0c-4521-a4d9-a31706dbedce-kube-api-access-lw29d\") pod \"nova-cell1-novncproxy-0\" (UID: \"9201a195-6f0c-4521-a4d9-a31706dbedce\") " pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.271038 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.288270 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.290310 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.292601 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.304140 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.320755 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.342570 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-logs\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.342837 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4dbk\" (UniqueName: \"kubernetes.io/projected/b4c3812c-02a2-412f-b893-6b5a70e6a66b-kube-api-access-l4dbk\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.343187 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb7dm\" (UniqueName: \"kubernetes.io/projected/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-kube-api-access-pb7dm\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.343344 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-log-httpd\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 
14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.344156 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-log-httpd\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.344316 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.345514 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.345565 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346139 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c3812c-02a2-412f-b893-6b5a70e6a66b-logs\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346308 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346381 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-scripts\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346437 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-config-data\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346493 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346554 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c3812c-02a2-412f-b893-6b5a70e6a66b-logs\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346618 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 
14:37:01.346657 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-config-data\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346711 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9rx6\" (UniqueName: \"kubernetes.io/projected/bcce1286-b6d3-417f-abf4-7f076bede9a8-kube-api-access-m9rx6\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346819 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-run-httpd\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.346905 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-config-data\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.349149 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.349936 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-run-httpd\") pod \"ceilometer-0\" 
(UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.361339 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4dbk\" (UniqueName: \"kubernetes.io/projected/b4c3812c-02a2-412f-b893-6b5a70e6a66b-kube-api-access-l4dbk\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.363371 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.363459 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.363711 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-config-data\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.363997 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.364162 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-scripts\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.364197 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.365916 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-config-data\") pod \"nova-metadata-0\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.367513 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9rx6\" (UniqueName: \"kubernetes.io/projected/bcce1286-b6d3-417f-abf4-7f076bede9a8-kube-api-access-m9rx6\") pod \"ceilometer-0\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") " pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.381413 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.431311 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449364 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449444 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-config-data\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449476 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449565 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-logs\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449607 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-config-data\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449698 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pb7dm\" (UniqueName: \"kubernetes.io/projected/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-kube-api-access-pb7dm\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.449723 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s675\" (UniqueName: \"kubernetes.io/projected/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-kube-api-access-4s675\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.450766 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-logs\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.457780 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-config-data\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.457835 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.471422 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb7dm\" (UniqueName: \"kubernetes.io/projected/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-kube-api-access-pb7dm\") pod \"nova-api-0\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " pod="openstack/nova-api-0" Jan 27 
14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.551800 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-config-data\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.551936 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s675\" (UniqueName: \"kubernetes.io/projected/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-kube-api-access-4s675\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.552121 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.560412 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-config-data\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.562090 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.582669 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s675\" (UniqueName: 
\"kubernetes.io/projected/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-kube-api-access-4s675\") pod \"nova-scheduler-0\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.630916 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.635336 4729 scope.go:117] "RemoveContainer" containerID="5f5bbaf6967d1fd8d17bbcb7b9cfb3de39f2490d5ee0f165c1570f0b45b18c69" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.643737 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.653176 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.712216 4729 scope.go:117] "RemoveContainer" containerID="c320f5e4171bfde4800e677a059ad36618d63f24632bff89159fe70b19697372" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.783703 4729 scope.go:117] "RemoveContainer" containerID="eff650bff9b57d19e0b2374265daff0191406a667aa431bfe63838ba9839a386" Jan 27 14:37:01 crc kubenswrapper[4729]: I0127 14:37:01.965330 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 27 14:37:02 crc kubenswrapper[4729]: W0127 14:37:02.067574 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcce1286_b6d3_417f_abf4_7f076bede9a8.slice/crio-acc7f6806bf30c568547952080e3635ee1f4c288fc216abb5064d90550db01fe WatchSource:0}: Error finding container acc7f6806bf30c568547952080e3635ee1f4c288fc216abb5064d90550db01fe: Status 404 returned error can't find the container with id acc7f6806bf30c568547952080e3635ee1f4c288fc216abb5064d90550db01fe Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 
14:37:02.067674 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112efdd5-3270-4415-805b-898634eebeb6" path="/var/lib/kubelet/pods/112efdd5-3270-4415-805b-898634eebeb6/volumes" Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.069290 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c242cff-8f90-41ee-8c60-62710709cad9" path="/var/lib/kubelet/pods/2c242cff-8f90-41ee-8c60-62710709cad9/volumes" Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.070025 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e3ce57-7767-4d97-909d-b5f2f3c402e2" path="/var/lib/kubelet/pods/68e3ce57-7767-4d97-909d-b5f2f3c402e2/volumes" Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.071363 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80896150-cf23-4535-b110-fc0079b0e2f0" path="/var/lib/kubelet/pods/80896150-cf23-4535-b110-fc0079b0e2f0/volumes" Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.072768 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e7d1df-5e4c-4683-b45a-351aabeeb3bb" path="/var/lib/kubelet/pods/93e7d1df-5e4c-4683-b45a-351aabeeb3bb/volumes" Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.073575 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.304574 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.404776 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:02 crc kubenswrapper[4729]: W0127 14:37:02.441141 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod539c7eb8_acc5_46b1_af9d_5cf1e5265f94.slice/crio-cdb6ae7f942fa808fa4f1b37d840649b6ab9d657c227cf254da5db8918d020f0 WatchSource:0}: Error finding 
container cdb6ae7f942fa808fa4f1b37d840649b6ab9d657c227cf254da5db8918d020f0: Status 404 returned error can't find the container with id cdb6ae7f942fa808fa4f1b37d840649b6ab9d657c227cf254da5db8918d020f0 Jan 27 14:37:02 crc kubenswrapper[4729]: I0127 14:37:02.477484 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:02 crc kubenswrapper[4729]: W0127 14:37:02.481607 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55db7a86_2ec5_4dbd_a1e2_c2a29c9ebfa7.slice/crio-11a7e0c274b61eccad7f386014f016d25fc41ea81377eb20705512e32521109f WatchSource:0}: Error finding container 11a7e0c274b61eccad7f386014f016d25fc41ea81377eb20705512e32521109f: Status 404 returned error can't find the container with id 11a7e0c274b61eccad7f386014f016d25fc41ea81377eb20705512e32521109f Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.000367 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7","Type":"ContainerStarted","Data":"ea1f6ecd843e7624a7469cc0c4bb3b4cf5eee185f9b7fb8e350621a98ab3506c"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.000434 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7","Type":"ContainerStarted","Data":"11a7e0c274b61eccad7f386014f016d25fc41ea81377eb20705512e32521109f"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.015373 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c3812c-02a2-412f-b893-6b5a70e6a66b","Type":"ContainerStarted","Data":"2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.015435 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b4c3812c-02a2-412f-b893-6b5a70e6a66b","Type":"ContainerStarted","Data":"c3939f7e13e54601e1676c878760dad707f9c6230e4811836ce3acfc77655331"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.025107 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerStarted","Data":"acc7f6806bf30c568547952080e3635ee1f4c288fc216abb5064d90550db01fe"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.036852 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"539c7eb8-acc5-46b1-af9d-5cf1e5265f94","Type":"ContainerStarted","Data":"daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.036925 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"539c7eb8-acc5-46b1-af9d-5cf1e5265f94","Type":"ContainerStarted","Data":"cdb6ae7f942fa808fa4f1b37d840649b6ab9d657c227cf254da5db8918d020f0"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.046675 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9201a195-6f0c-4521-a4d9-a31706dbedce","Type":"ContainerStarted","Data":"e856c76f3aac4b84c76e848f399009c74a5bea09dcb41262314ff031143ad2ae"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.046711 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9201a195-6f0c-4521-a4d9-a31706dbedce","Type":"ContainerStarted","Data":"74472fb800ced56479fb4322965422d8a74335a7a9d2e2b18305f663546b187f"} Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.072283 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.072255384 podStartE2EDuration="3.072255384s" podCreationTimestamp="2026-01-27 14:37:00 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:03.057784193 +0000 UTC m=+1909.641975197" watchObservedRunningTime="2026-01-27 14:37:03.072255384 +0000 UTC m=+1909.656446388" Jan 27 14:37:03 crc kubenswrapper[4729]: I0127 14:37:03.157133 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.157107855 podStartE2EDuration="3.157107855s" podCreationTimestamp="2026-01-27 14:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:03.086464329 +0000 UTC m=+1909.670655333" watchObservedRunningTime="2026-01-27 14:37:03.157107855 +0000 UTC m=+1909.741298869" Jan 27 14:37:04 crc kubenswrapper[4729]: I0127 14:37:04.093696 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7","Type":"ContainerStarted","Data":"9ee3ef79698d1fa79db9541b28591fd7ef8ab381f444f3cc5d30d0e580fd88e5"} Jan 27 14:37:04 crc kubenswrapper[4729]: I0127 14:37:04.101117 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c3812c-02a2-412f-b893-6b5a70e6a66b","Type":"ContainerStarted","Data":"43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77"} Jan 27 14:37:04 crc kubenswrapper[4729]: I0127 14:37:04.107383 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerStarted","Data":"fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e"} Jan 27 14:37:04 crc kubenswrapper[4729]: I0127 14:37:04.175285 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.17525626 podStartE2EDuration="4.17525626s" podCreationTimestamp="2026-01-27 14:37:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:04.15226902 +0000 UTC m=+1910.736460034" watchObservedRunningTime="2026-01-27 14:37:04.17525626 +0000 UTC m=+1910.759447264" Jan 27 14:37:04 crc kubenswrapper[4729]: I0127 14:37:04.189125 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.189095186 podStartE2EDuration="4.189095186s" podCreationTimestamp="2026-01-27 14:37:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:04.172314855 +0000 UTC m=+1910.756505869" watchObservedRunningTime="2026-01-27 14:37:04.189095186 +0000 UTC m=+1910.773286190" Jan 27 14:37:05 crc kubenswrapper[4729]: I0127 14:37:05.120019 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerStarted","Data":"1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909"} Jan 27 14:37:06 crc kubenswrapper[4729]: I0127 14:37:06.132173 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerStarted","Data":"764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6"} Jan 27 14:37:06 crc kubenswrapper[4729]: I0127 14:37:06.381928 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:06 crc kubenswrapper[4729]: I0127 14:37:06.631189 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:37:06 crc kubenswrapper[4729]: I0127 14:37:06.631313 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:37:06 crc kubenswrapper[4729]: I0127 14:37:06.653504 4729 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:37:08 crc kubenswrapper[4729]: I0127 14:37:08.161640 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerStarted","Data":"4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb"} Jan 27 14:37:08 crc kubenswrapper[4729]: I0127 14:37:08.162246 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:37:08 crc kubenswrapper[4729]: I0127 14:37:08.202139 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.045852565 podStartE2EDuration="8.202116655s" podCreationTimestamp="2026-01-27 14:37:00 +0000 UTC" firstStartedPulling="2026-01-27 14:37:02.069598327 +0000 UTC m=+1908.653789341" lastFinishedPulling="2026-01-27 14:37:07.225862427 +0000 UTC m=+1913.810053431" observedRunningTime="2026-01-27 14:37:08.182932282 +0000 UTC m=+1914.767123296" watchObservedRunningTime="2026-01-27 14:37:08.202116655 +0000 UTC m=+1914.786307659" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.382334 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.400002 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.631368 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.631438 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.644584 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.644671 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.654317 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:37:11 crc kubenswrapper[4729]: I0127 14:37:11.691267 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.238686 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.260795 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.502166 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wds9p"] Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.504125 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.508966 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.509280 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.548957 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wds9p"] Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.608951 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.609352 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvl87\" (UniqueName: \"kubernetes.io/projected/582e9a31-a273-45ab-a05f-9bacd55948d6-kube-api-access-xvl87\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.609836 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-config-data\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.610053 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-scripts\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.649292 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.649645 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.712697 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.712827 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvl87\" (UniqueName: \"kubernetes.io/projected/582e9a31-a273-45ab-a05f-9bacd55948d6-kube-api-access-xvl87\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.713012 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-config-data\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.713093 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-scripts\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.721463 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.722414 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-config-data\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.722507 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-scripts\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.732137 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.254:8774/\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.732166 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.254:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.748977 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvl87\" (UniqueName: \"kubernetes.io/projected/582e9a31-a273-45ab-a05f-9bacd55948d6-kube-api-access-xvl87\") pod \"nova-cell1-cell-mapping-wds9p\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:12 crc kubenswrapper[4729]: I0127 14:37:12.860864 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:13 crc kubenswrapper[4729]: I0127 14:37:13.459498 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wds9p"] Jan 27 14:37:14 crc kubenswrapper[4729]: I0127 14:37:14.269097 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wds9p" event={"ID":"582e9a31-a273-45ab-a05f-9bacd55948d6","Type":"ContainerStarted","Data":"0431cc85fe0c45d29497b70a5d41aa363bf7cba9535e3fac8d6071b24ddbcd5f"} Jan 27 14:37:14 crc kubenswrapper[4729]: I0127 14:37:14.269484 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wds9p" event={"ID":"582e9a31-a273-45ab-a05f-9bacd55948d6","Type":"ContainerStarted","Data":"146896bb76e60b83c2faf880e8c3352415d1008ce4796a4ca9e432c97e326264"} Jan 27 14:37:14 crc kubenswrapper[4729]: I0127 14:37:14.285291 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wds9p" 
podStartSLOduration=2.285271055 podStartE2EDuration="2.285271055s" podCreationTimestamp="2026-01-27 14:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:14.282575396 +0000 UTC m=+1920.866766400" watchObservedRunningTime="2026-01-27 14:37:14.285271055 +0000 UTC m=+1920.869462059" Jan 27 14:37:19 crc kubenswrapper[4729]: I0127 14:37:19.334898 4729 generic.go:334] "Generic (PLEG): container finished" podID="582e9a31-a273-45ab-a05f-9bacd55948d6" containerID="0431cc85fe0c45d29497b70a5d41aa363bf7cba9535e3fac8d6071b24ddbcd5f" exitCode=0 Jan 27 14:37:19 crc kubenswrapper[4729]: I0127 14:37:19.335109 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wds9p" event={"ID":"582e9a31-a273-45ab-a05f-9bacd55948d6","Type":"ContainerDied","Data":"0431cc85fe0c45d29497b70a5d41aa363bf7cba9535e3fac8d6071b24ddbcd5f"} Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.765843 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.876040 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-scripts\") pod \"582e9a31-a273-45ab-a05f-9bacd55948d6\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.876213 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-config-data\") pod \"582e9a31-a273-45ab-a05f-9bacd55948d6\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.876313 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvl87\" (UniqueName: \"kubernetes.io/projected/582e9a31-a273-45ab-a05f-9bacd55948d6-kube-api-access-xvl87\") pod \"582e9a31-a273-45ab-a05f-9bacd55948d6\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.876375 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-combined-ca-bundle\") pod \"582e9a31-a273-45ab-a05f-9bacd55948d6\" (UID: \"582e9a31-a273-45ab-a05f-9bacd55948d6\") " Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.882565 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-scripts" (OuterVolumeSpecName: "scripts") pod "582e9a31-a273-45ab-a05f-9bacd55948d6" (UID: "582e9a31-a273-45ab-a05f-9bacd55948d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.883375 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582e9a31-a273-45ab-a05f-9bacd55948d6-kube-api-access-xvl87" (OuterVolumeSpecName: "kube-api-access-xvl87") pod "582e9a31-a273-45ab-a05f-9bacd55948d6" (UID: "582e9a31-a273-45ab-a05f-9bacd55948d6"). InnerVolumeSpecName "kube-api-access-xvl87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.908835 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-config-data" (OuterVolumeSpecName: "config-data") pod "582e9a31-a273-45ab-a05f-9bacd55948d6" (UID: "582e9a31-a273-45ab-a05f-9bacd55948d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.916795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "582e9a31-a273-45ab-a05f-9bacd55948d6" (UID: "582e9a31-a273-45ab-a05f-9bacd55948d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.980314 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.980364 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.980377 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvl87\" (UniqueName: \"kubernetes.io/projected/582e9a31-a273-45ab-a05f-9bacd55948d6-kube-api-access-xvl87\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:20 crc kubenswrapper[4729]: I0127 14:37:20.980391 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582e9a31-a273-45ab-a05f-9bacd55948d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.363531 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wds9p" event={"ID":"582e9a31-a273-45ab-a05f-9bacd55948d6","Type":"ContainerDied","Data":"146896bb76e60b83c2faf880e8c3352415d1008ce4796a4ca9e432c97e326264"} Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.363571 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="146896bb76e60b83c2faf880e8c3352415d1008ce4796a4ca9e432c97e326264" Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.363936 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wds9p" Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.536459 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.536916 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-log" containerID="cri-o://ea1f6ecd843e7624a7469cc0c4bb3b4cf5eee185f9b7fb8e350621a98ab3506c" gracePeriod=30 Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.537043 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-api" containerID="cri-o://9ee3ef79698d1fa79db9541b28591fd7ef8ab381f444f3cc5d30d0e580fd88e5" gracePeriod=30 Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.561445 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.562052 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" containerName="nova-scheduler-scheduler" containerID="cri-o://daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad" gracePeriod=30 Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.618634 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.618836 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-log" containerID="cri-o://2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3" gracePeriod=30 Jan 27 14:37:21 crc kubenswrapper[4729]: I0127 14:37:21.618965 4729 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-metadata" containerID="cri-o://43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77" gracePeriod=30 Jan 27 14:37:21 crc kubenswrapper[4729]: E0127 14:37:21.662641 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:37:21 crc kubenswrapper[4729]: E0127 14:37:21.668321 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:37:21 crc kubenswrapper[4729]: E0127 14:37:21.671585 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 27 14:37:21 crc kubenswrapper[4729]: E0127 14:37:21.671642 4729 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" containerName="nova-scheduler-scheduler" Jan 27 14:37:22 crc kubenswrapper[4729]: I0127 14:37:22.380979 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerID="2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3" exitCode=143 Jan 27 14:37:22 crc kubenswrapper[4729]: I0127 14:37:22.381053 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c3812c-02a2-412f-b893-6b5a70e6a66b","Type":"ContainerDied","Data":"2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3"} Jan 27 14:37:22 crc kubenswrapper[4729]: I0127 14:37:22.383805 4729 generic.go:334] "Generic (PLEG): container finished" podID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerID="ea1f6ecd843e7624a7469cc0c4bb3b4cf5eee185f9b7fb8e350621a98ab3506c" exitCode=143 Jan 27 14:37:22 crc kubenswrapper[4729]: I0127 14:37:22.383869 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7","Type":"ContainerDied","Data":"ea1f6ecd843e7624a7469cc0c4bb3b4cf5eee185f9b7fb8e350621a98ab3506c"} Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.414062 4729 generic.go:334] "Generic (PLEG): container finished" podID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerID="aec31dcba4c301eceb4d77c4b3bad68f42cb072a92a8d1327736b8579ac4d9aa" exitCode=137 Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.414151 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerDied","Data":"aec31dcba4c301eceb4d77c4b3bad68f42cb072a92a8d1327736b8579ac4d9aa"} Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.721542 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.776162 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle\") pod \"88a128d2-d2b3-4e01-8384-b4263e97ee51\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.776366 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-scripts\") pod \"88a128d2-d2b3-4e01-8384-b4263e97ee51\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.776542 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-config-data\") pod \"88a128d2-d2b3-4e01-8384-b4263e97ee51\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.776580 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7m4\" (UniqueName: \"kubernetes.io/projected/88a128d2-d2b3-4e01-8384-b4263e97ee51-kube-api-access-jk7m4\") pod \"88a128d2-d2b3-4e01-8384-b4263e97ee51\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.782668 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88a128d2-d2b3-4e01-8384-b4263e97ee51-kube-api-access-jk7m4" (OuterVolumeSpecName: "kube-api-access-jk7m4") pod "88a128d2-d2b3-4e01-8384-b4263e97ee51" (UID: "88a128d2-d2b3-4e01-8384-b4263e97ee51"). InnerVolumeSpecName "kube-api-access-jk7m4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.783496 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-scripts" (OuterVolumeSpecName: "scripts") pod "88a128d2-d2b3-4e01-8384-b4263e97ee51" (UID: "88a128d2-d2b3-4e01-8384-b4263e97ee51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.880938 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.881153 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk7m4\" (UniqueName: \"kubernetes.io/projected/88a128d2-d2b3-4e01-8384-b4263e97ee51-kube-api-access-jk7m4\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:24 crc kubenswrapper[4729]: E0127 14:37:24.939838 4729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle podName:88a128d2-d2b3-4e01-8384-b4263e97ee51 nodeName:}" failed. No retries permitted until 2026-01-27 14:37:25.439802233 +0000 UTC m=+1932.023993237 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle") pod "88a128d2-d2b3-4e01-8384-b4263e97ee51" (UID: "88a128d2-d2b3-4e01-8384-b4263e97ee51") : error deleting /var/lib/kubelet/pods/88a128d2-d2b3-4e01-8384-b4263e97ee51/volume-subpaths: remove /var/lib/kubelet/pods/88a128d2-d2b3-4e01-8384-b4263e97ee51/volume-subpaths: no such file or directory Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.942905 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-config-data" (OuterVolumeSpecName: "config-data") pod "88a128d2-d2b3-4e01-8384-b4263e97ee51" (UID: "88a128d2-d2b3-4e01-8384-b4263e97ee51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:24 crc kubenswrapper[4729]: I0127 14:37:24.983307 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.343018 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.392346 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-combined-ca-bundle\") pod \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.392758 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c3812c-02a2-412f-b893-6b5a70e6a66b-logs\") pod \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.392911 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4dbk\" (UniqueName: \"kubernetes.io/projected/b4c3812c-02a2-412f-b893-6b5a70e6a66b-kube-api-access-l4dbk\") pod \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.392994 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-config-data\") pod \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.397577 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-nova-metadata-tls-certs\") pod \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\" (UID: \"b4c3812c-02a2-412f-b893-6b5a70e6a66b\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.393492 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b4c3812c-02a2-412f-b893-6b5a70e6a66b-logs" (OuterVolumeSpecName: "logs") pod "b4c3812c-02a2-412f-b893-6b5a70e6a66b" (UID: "b4c3812c-02a2-412f-b893-6b5a70e6a66b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.398287 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4c3812c-02a2-412f-b893-6b5a70e6a66b-kube-api-access-l4dbk" (OuterVolumeSpecName: "kube-api-access-l4dbk") pod "b4c3812c-02a2-412f-b893-6b5a70e6a66b" (UID: "b4c3812c-02a2-412f-b893-6b5a70e6a66b"). InnerVolumeSpecName "kube-api-access-l4dbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.400523 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4c3812c-02a2-412f-b893-6b5a70e6a66b-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.400556 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4dbk\" (UniqueName: \"kubernetes.io/projected/b4c3812c-02a2-412f-b893-6b5a70e6a66b-kube-api-access-l4dbk\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.434259 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4c3812c-02a2-412f-b893-6b5a70e6a66b" (UID: "b4c3812c-02a2-412f-b893-6b5a70e6a66b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.436162 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-config-data" (OuterVolumeSpecName: "config-data") pod "b4c3812c-02a2-412f-b893-6b5a70e6a66b" (UID: "b4c3812c-02a2-412f-b893-6b5a70e6a66b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.443275 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"88a128d2-d2b3-4e01-8384-b4263e97ee51","Type":"ContainerDied","Data":"b949013be380241c91eb5799650e719d736a26751fddea350a8a7467033e1cb9"} Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.443341 4729 scope.go:117] "RemoveContainer" containerID="aec31dcba4c301eceb4d77c4b3bad68f42cb072a92a8d1327736b8579ac4d9aa" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.443542 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.453108 4729 generic.go:334] "Generic (PLEG): container finished" podID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" containerID="daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad" exitCode=0 Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.453174 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"539c7eb8-acc5-46b1-af9d-5cf1e5265f94","Type":"ContainerDied","Data":"daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad"} Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.463261 4729 generic.go:334] "Generic (PLEG): container finished" podID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerID="9ee3ef79698d1fa79db9541b28591fd7ef8ab381f444f3cc5d30d0e580fd88e5" exitCode=0 Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.463328 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7","Type":"ContainerDied","Data":"9ee3ef79698d1fa79db9541b28591fd7ef8ab381f444f3cc5d30d0e580fd88e5"} Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.473903 4729 generic.go:334] "Generic (PLEG): container finished" podID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerID="43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77" exitCode=0 Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.473957 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4c3812c-02a2-412f-b893-6b5a70e6a66b","Type":"ContainerDied","Data":"43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77"} Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.473983 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b4c3812c-02a2-412f-b893-6b5a70e6a66b","Type":"ContainerDied","Data":"c3939f7e13e54601e1676c878760dad707f9c6230e4811836ce3acfc77655331"} Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.474040 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.487790 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b4c3812c-02a2-412f-b893-6b5a70e6a66b" (UID: "b4c3812c-02a2-412f-b893-6b5a70e6a66b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.488062 4729 scope.go:117] "RemoveContainer" containerID="7d3bcbe986df1138fb5383c3db31b8885d829c3f96c2e4df47a35b72f7a23fd4" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.502017 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle\") pod \"88a128d2-d2b3-4e01-8384-b4263e97ee51\" (UID: \"88a128d2-d2b3-4e01-8384-b4263e97ee51\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.503360 4729 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.503579 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.503676 4729 reconciler_common.go:293] "Volume detached for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4c3812c-02a2-412f-b893-6b5a70e6a66b-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.506300 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88a128d2-d2b3-4e01-8384-b4263e97ee51" (UID: "88a128d2-d2b3-4e01-8384-b4263e97ee51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.514759 4729 scope.go:117] "RemoveContainer" containerID="0892bdf4ee459835dc4989233aabbbf43cf16f9e40c7040f2d59bd9df7ab4bdf" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.550383 4729 scope.go:117] "RemoveContainer" containerID="df74511b53b53b407af2cb969fc0aa4f5a33635f629af82acc321a8618df665f" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.585611 4729 scope.go:117] "RemoveContainer" containerID="43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.606121 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88a128d2-d2b3-4e01-8384-b4263e97ee51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.636561 4729 scope.go:117] "RemoveContainer" containerID="2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.674084 4729 scope.go:117] "RemoveContainer" containerID="43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77" Jan 27 14:37:25 crc kubenswrapper[4729]: E0127 14:37:25.674427 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77\": container with ID starting with 43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77 not found: ID does not exist" containerID="43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.674465 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77"} err="failed to get container status \"43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77\": rpc error: code = NotFound desc = could not find container \"43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77\": container with ID starting with 43f0f45e5404438dba3eb54b8be788d42b97f026e811086f04e691fd08fd7e77 not found: ID does not exist" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.674492 4729 scope.go:117] "RemoveContainer" containerID="2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3" Jan 27 14:37:25 crc kubenswrapper[4729]: E0127 14:37:25.674902 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3\": container with ID starting with 2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3 not found: ID does not exist" containerID="2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.674919 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3"} err="failed to get container status \"2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3\": rpc error: code = NotFound desc = could not find container \"2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3\": container with ID 
starting with 2442bbabaa2db2cde24f4083c4212679692a70ca102a570e2bebb162cb5a43b3 not found: ID does not exist" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.756810 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.811325 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb7dm\" (UniqueName: \"kubernetes.io/projected/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-kube-api-access-pb7dm\") pod \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.811719 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-combined-ca-bundle\") pod \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.814202 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-logs\") pod \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.814366 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-config-data\") pod \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\" (UID: \"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.816063 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-logs" (OuterVolumeSpecName: "logs") pod "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" (UID: 
"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.821787 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.847741 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-kube-api-access-pb7dm" (OuterVolumeSpecName: "kube-api-access-pb7dm") pod "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" (UID: "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7"). InnerVolumeSpecName "kube-api-access-pb7dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.851933 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" (UID: "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.861633 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.869177 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-config-data" (OuterVolumeSpecName: "config-data") pod "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" (UID: "55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.923139 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s675\" (UniqueName: \"kubernetes.io/projected/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-kube-api-access-4s675\") pod \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.923256 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-combined-ca-bundle\") pod \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.923293 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-config-data\") pod \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\" (UID: \"539c7eb8-acc5-46b1-af9d-5cf1e5265f94\") " Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.924003 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.924020 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.924029 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb7dm\" (UniqueName: \"kubernetes.io/projected/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7-kube-api-access-pb7dm\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 
14:37:25.927484 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-kube-api-access-4s675" (OuterVolumeSpecName: "kube-api-access-4s675") pod "539c7eb8-acc5-46b1-af9d-5cf1e5265f94" (UID: "539c7eb8-acc5-46b1-af9d-5cf1e5265f94"). InnerVolumeSpecName "kube-api-access-4s675". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.968060 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-config-data" (OuterVolumeSpecName: "config-data") pod "539c7eb8-acc5-46b1-af9d-5cf1e5265f94" (UID: "539c7eb8-acc5-46b1-af9d-5cf1e5265f94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:25 crc kubenswrapper[4729]: I0127 14:37:25.982722 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "539c7eb8-acc5-46b1-af9d-5cf1e5265f94" (UID: "539c7eb8-acc5-46b1-af9d-5cf1e5265f94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.026193 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.026224 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.026235 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s675\" (UniqueName: \"kubernetes.io/projected/539c7eb8-acc5-46b1-af9d-5cf1e5265f94-kube-api-access-4s675\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.118083 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.139715 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.159228 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.177021 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195303 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195774 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-metadata" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195790 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" 
containerName="nova-metadata-metadata" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195803 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582e9a31-a273-45ab-a05f-9bacd55948d6" containerName="nova-manage" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195809 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e9a31-a273-45ab-a05f-9bacd55948d6" containerName="nova-manage" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195819 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-log" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195826 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-log" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195838 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-evaluator" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195844 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-evaluator" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195857 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-api" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195862 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-api" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195895 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-api" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195902 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-api" Jan 27 
14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195916 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" containerName="nova-scheduler-scheduler" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195921 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" containerName="nova-scheduler-scheduler" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195934 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-log" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195940 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-log" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195950 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-notifier" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195956 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-notifier" Jan 27 14:37:26 crc kubenswrapper[4729]: E0127 14:37:26.195968 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-listener" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.195974 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-listener" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196167 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-listener" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196183 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" 
containerName="nova-scheduler-scheduler" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196196 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="582e9a31-a273-45ab-a05f-9bacd55948d6" containerName="nova-manage" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196207 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-api" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196219 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-api" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196231 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-evaluator" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196239 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-metadata" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196250 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" containerName="aodh-notifier" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196259 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" containerName="nova-metadata-log" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.196268 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" containerName="nova-api-log" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.197476 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.199576 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.199744 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.214569 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.230911 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.230999 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5945a095-d047-46d4-aa7d-3989268e88f9-logs\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.231060 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.231164 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wsr\" (UniqueName: \"kubernetes.io/projected/5945a095-d047-46d4-aa7d-3989268e88f9-kube-api-access-z9wsr\") pod 
\"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.231237 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-config-data\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.235114 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.238463 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.241392 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.241833 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zvnk7" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.242415 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.242414 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.244489 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.263407 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335075 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-scripts\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335165 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335228 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5945a095-d047-46d4-aa7d-3989268e88f9-logs\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335284 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335334 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxsc\" (UniqueName: \"kubernetes.io/projected/2c8103be-a039-4e55-88f3-9e75f2f123dc-kube-api-access-7qxsc\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335423 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-config-data\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 
crc kubenswrapper[4729]: I0127 14:37:26.335490 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wsr\" (UniqueName: \"kubernetes.io/projected/5945a095-d047-46d4-aa7d-3989268e88f9-kube-api-access-z9wsr\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335536 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335606 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-config-data\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335640 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.335679 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-public-tls-certs\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.336754 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5945a095-d047-46d4-aa7d-3989268e88f9-logs\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.340994 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-config-data\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.341587 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.341980 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5945a095-d047-46d4-aa7d-3989268e88f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.350744 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wsr\" (UniqueName: \"kubernetes.io/projected/5945a095-d047-46d4-aa7d-3989268e88f9-kube-api-access-z9wsr\") pod \"nova-metadata-0\" (UID: \"5945a095-d047-46d4-aa7d-3989268e88f9\") " pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.437892 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qxsc\" (UniqueName: \"kubernetes.io/projected/2c8103be-a039-4e55-88f3-9e75f2f123dc-kube-api-access-7qxsc\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 
crc kubenswrapper[4729]: I0127 14:37:26.438009 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-config-data\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.438054 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.438117 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.438154 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-public-tls-certs\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.438232 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-scripts\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.442490 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-scripts\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " 
pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.444862 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-config-data\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.445663 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.446212 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-internal-tls-certs\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.447644 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-public-tls-certs\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.458394 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxsc\" (UniqueName: \"kubernetes.io/projected/2c8103be-a039-4e55-88f3-9e75f2f123dc-kube-api-access-7qxsc\") pod \"aodh-0\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.490086 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"539c7eb8-acc5-46b1-af9d-5cf1e5265f94","Type":"ContainerDied","Data":"cdb6ae7f942fa808fa4f1b37d840649b6ab9d657c227cf254da5db8918d020f0"} Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.490259 4729 scope.go:117] "RemoveContainer" containerID="daa8fa97c763708dbec7272075058d4ca10023efdd569cb59a8a8903ec1e05ad" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.490629 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.497547 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7","Type":"ContainerDied","Data":"11a7e0c274b61eccad7f386014f016d25fc41ea81377eb20705512e32521109f"} Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.497628 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.525700 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.545323 4729 scope.go:117] "RemoveContainer" containerID="9ee3ef79698d1fa79db9541b28591fd7ef8ab381f444f3cc5d30d0e580fd88e5" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.548776 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.553550 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.566608 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.569104 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.571265 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.578222 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.610021 4729 scope.go:117] "RemoveContainer" containerID="ea1f6ecd843e7624a7469cc0c4bb3b4cf5eee185f9b7fb8e350621a98ab3506c" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.610145 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.639946 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.644529 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7th7s\" (UniqueName: \"kubernetes.io/projected/96379486-5600-4752-9729-0fc090685ea4-kube-api-access-7th7s\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.644599 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96379486-5600-4752-9729-0fc090685ea4-config-data\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.644755 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96379486-5600-4752-9729-0fc090685ea4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.678543 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.691755 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.693601 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.700955 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.717111 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746023 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8885a95-dae2-4ac3-92f1-217cd7302498-logs\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746084 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96379486-5600-4752-9729-0fc090685ea4-config-data\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746183 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-config-data\") pod \"nova-api-0\" (UID: 
\"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746209 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746225 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqfb\" (UniqueName: \"kubernetes.io/projected/d8885a95-dae2-4ac3-92f1-217cd7302498-kube-api-access-xvqfb\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746272 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96379486-5600-4752-9729-0fc090685ea4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.746360 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7th7s\" (UniqueName: \"kubernetes.io/projected/96379486-5600-4752-9729-0fc090685ea4-kube-api-access-7th7s\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.757699 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96379486-5600-4752-9729-0fc090685ea4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 
14:37:26.768584 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96379486-5600-4752-9729-0fc090685ea4-config-data\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.813023 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7th7s\" (UniqueName: \"kubernetes.io/projected/96379486-5600-4752-9729-0fc090685ea4-kube-api-access-7th7s\") pod \"nova-scheduler-0\" (UID: \"96379486-5600-4752-9729-0fc090685ea4\") " pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.861641 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-config-data\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.861706 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.861730 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvqfb\" (UniqueName: \"kubernetes.io/projected/d8885a95-dae2-4ac3-92f1-217cd7302498-kube-api-access-xvqfb\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.861984 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8885a95-dae2-4ac3-92f1-217cd7302498-logs\") pod \"nova-api-0\" 
(UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.862663 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8885a95-dae2-4ac3-92f1-217cd7302498-logs\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.871510 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-config-data\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.881211 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.894253 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 27 14:37:26 crc kubenswrapper[4729]: I0127 14:37:26.900418 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvqfb\" (UniqueName: \"kubernetes.io/projected/d8885a95-dae2-4ac3-92f1-217cd7302498-kube-api-access-xvqfb\") pod \"nova-api-0\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " pod="openstack/nova-api-0" Jan 27 14:37:27 crc kubenswrapper[4729]: I0127 14:37:27.042718 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:27 crc kubenswrapper[4729]: I0127 14:37:27.410364 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 27 14:37:27 crc kubenswrapper[4729]: I0127 14:37:27.524798 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 14:37:27 crc kubenswrapper[4729]: W0127 14:37:27.530117 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c8103be_a039_4e55_88f3_9e75f2f123dc.slice/crio-65dce45fac20859e8edcbfc5bc5c78c9556472f53f135a92d591e12b034e833e WatchSource:0}: Error finding container 65dce45fac20859e8edcbfc5bc5c78c9556472f53f135a92d591e12b034e833e: Status 404 returned error can't find the container with id 65dce45fac20859e8edcbfc5bc5c78c9556472f53f135a92d591e12b034e833e Jan 27 14:37:27 crc kubenswrapper[4729]: I0127 14:37:27.531195 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5945a095-d047-46d4-aa7d-3989268e88f9","Type":"ContainerStarted","Data":"255a2aca776669e2c7908a82c0c48ae1a48055b361ac862107a542877cf4c90c"} Jan 27 14:37:27 crc kubenswrapper[4729]: W0127 14:37:27.588357 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8885a95_dae2_4ac3_92f1_217cd7302498.slice/crio-1632213ff657c6e66be06bbeb8318e8a860fb76bd51dbb55fbf79bb63e6b0cc6 WatchSource:0}: Error finding container 1632213ff657c6e66be06bbeb8318e8a860fb76bd51dbb55fbf79bb63e6b0cc6: Status 404 returned error can't find the container with id 1632213ff657c6e66be06bbeb8318e8a860fb76bd51dbb55fbf79bb63e6b0cc6 Jan 27 14:37:27 crc kubenswrapper[4729]: I0127 14:37:27.590195 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:27 crc kubenswrapper[4729]: I0127 14:37:27.603930 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.069795 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="539c7eb8-acc5-46b1-af9d-5cf1e5265f94" path="/var/lib/kubelet/pods/539c7eb8-acc5-46b1-af9d-5cf1e5265f94/volumes" Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.070620 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7" path="/var/lib/kubelet/pods/55db7a86-2ec5-4dbd-a1e2-c2a29c9ebfa7/volumes" Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.071188 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88a128d2-d2b3-4e01-8384-b4263e97ee51" path="/var/lib/kubelet/pods/88a128d2-d2b3-4e01-8384-b4263e97ee51/volumes" Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.072436 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4c3812c-02a2-412f-b893-6b5a70e6a66b" path="/var/lib/kubelet/pods/b4c3812c-02a2-412f-b893-6b5a70e6a66b/volumes" Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.553069 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerStarted","Data":"65dce45fac20859e8edcbfc5bc5c78c9556472f53f135a92d591e12b034e833e"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.584416 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5945a095-d047-46d4-aa7d-3989268e88f9","Type":"ContainerStarted","Data":"d8b876f5a43d303ddada1e2c3d96fa46d1ac34f304716c1a68394268d95482a9"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.584471 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5945a095-d047-46d4-aa7d-3989268e88f9","Type":"ContainerStarted","Data":"6069a081dc3f7c0ffb2594e2a537601627211f0b0b8573990c12abf197363e88"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.591200 
4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96379486-5600-4752-9729-0fc090685ea4","Type":"ContainerStarted","Data":"0d39273113bbf8008f3373b311e49b21218750d33005b298e8415118329e787b"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.591255 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96379486-5600-4752-9729-0fc090685ea4","Type":"ContainerStarted","Data":"2fd0c76a8da33eb64e0c59147b6eaff5a5d1a7dc77601be6475571511009df0b"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.599383 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8885a95-dae2-4ac3-92f1-217cd7302498","Type":"ContainerStarted","Data":"ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.599442 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8885a95-dae2-4ac3-92f1-217cd7302498","Type":"ContainerStarted","Data":"1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.599453 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8885a95-dae2-4ac3-92f1-217cd7302498","Type":"ContainerStarted","Data":"1632213ff657c6e66be06bbeb8318e8a860fb76bd51dbb55fbf79bb63e6b0cc6"} Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.612546 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6125294180000003 podStartE2EDuration="2.612529418s" podCreationTimestamp="2026-01-27 14:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:28.611471771 +0000 UTC m=+1935.195662775" watchObservedRunningTime="2026-01-27 14:37:28.612529418 +0000 UTC m=+1935.196720422" 
Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.637812 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6377974379999998 podStartE2EDuration="2.637797438s" podCreationTimestamp="2026-01-27 14:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:28.632133811 +0000 UTC m=+1935.216324815" watchObservedRunningTime="2026-01-27 14:37:28.637797438 +0000 UTC m=+1935.221988442" Jan 27 14:37:28 crc kubenswrapper[4729]: I0127 14:37:28.653349 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.653323187 podStartE2EDuration="2.653323187s" podCreationTimestamp="2026-01-27 14:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:28.648330888 +0000 UTC m=+1935.232521892" watchObservedRunningTime="2026-01-27 14:37:28.653323187 +0000 UTC m=+1935.237514211" Jan 27 14:37:29 crc kubenswrapper[4729]: I0127 14:37:29.612375 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerStarted","Data":"386ea332bce9cc4275f68ebca243629ae7bac2dd75d3a167410fea880f3a046c"} Jan 27 14:37:30 crc kubenswrapper[4729]: I0127 14:37:30.627171 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerStarted","Data":"d8d24774e098daca70ccfe9ec0639e146e9ed676ba8f0562868f5d420f9fda94"} Jan 27 14:37:31 crc kubenswrapper[4729]: I0127 14:37:31.445955 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:37:31 crc kubenswrapper[4729]: I0127 14:37:31.553917 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-metadata-0" Jan 27 14:37:31 crc kubenswrapper[4729]: I0127 14:37:31.553968 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 27 14:37:31 crc kubenswrapper[4729]: I0127 14:37:31.640459 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerStarted","Data":"39b926b6eaa04d6f5bdef659507224f01935695b0ed457e08163eb269499d1ff"} Jan 27 14:37:31 crc kubenswrapper[4729]: I0127 14:37:31.895373 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 27 14:37:32 crc kubenswrapper[4729]: I0127 14:37:32.654710 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerStarted","Data":"074e9cd7635bad89c24dc98ce10472fdb1437435842d8a3db4187f453baa064f"} Jan 27 14:37:32 crc kubenswrapper[4729]: I0127 14:37:32.696545 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.435934619 podStartE2EDuration="6.696522111s" podCreationTimestamp="2026-01-27 14:37:26 +0000 UTC" firstStartedPulling="2026-01-27 14:37:27.53199422 +0000 UTC m=+1934.116185224" lastFinishedPulling="2026-01-27 14:37:31.792581712 +0000 UTC m=+1938.376772716" observedRunningTime="2026-01-27 14:37:32.673054469 +0000 UTC m=+1939.257245503" watchObservedRunningTime="2026-01-27 14:37:32.696522111 +0000 UTC m=+1939.280713115" Jan 27 14:37:36 crc kubenswrapper[4729]: I0127 14:37:36.554255 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:37:36 crc kubenswrapper[4729]: I0127 14:37:36.555557 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 27 14:37:36 crc kubenswrapper[4729]: I0127 14:37:36.895791 4729 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 27 14:37:36 crc kubenswrapper[4729]: I0127 14:37:36.956313 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 27 14:37:37 crc kubenswrapper[4729]: I0127 14:37:37.044051 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:37:37 crc kubenswrapper[4729]: I0127 14:37:37.044106 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:37:37 crc kubenswrapper[4729]: I0127 14:37:37.570101 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5945a095-d047-46d4-aa7d-3989268e88f9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:37 crc kubenswrapper[4729]: I0127 14:37:37.570347 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5945a095-d047-46d4-aa7d-3989268e88f9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:37 crc kubenswrapper[4729]: I0127 14:37:37.763391 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 27 14:37:38 crc kubenswrapper[4729]: I0127 14:37:38.126050 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:38 crc kubenswrapper[4729]: I0127 14:37:38.126075 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:37:46 crc kubenswrapper[4729]: I0127 14:37:46.560601 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:37:46 crc kubenswrapper[4729]: I0127 14:37:46.561271 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 27 14:37:46 crc kubenswrapper[4729]: I0127 14:37:46.568840 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:37:46 crc kubenswrapper[4729]: I0127 14:37:46.569349 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 27 14:37:47 crc kubenswrapper[4729]: I0127 14:37:47.047218 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:37:47 crc kubenswrapper[4729]: I0127 14:37:47.048408 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:37:47 crc kubenswrapper[4729]: I0127 14:37:47.052531 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:37:47 crc kubenswrapper[4729]: I0127 14:37:47.053632 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:37:47 crc kubenswrapper[4729]: I0127 14:37:47.853214 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:37:47 crc kubenswrapper[4729]: I0127 14:37:47.858912 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.080540 4729 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-f84f9ccf-7dvpt"] Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.082604 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.129843 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.130274 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-config\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.130339 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.130466 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.130497 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.130589 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cx2m\" (UniqueName: \"kubernetes.io/projected/060e677e-e80b-48ad-9f73-1242976939c5-kube-api-access-2cx2m\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.145744 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-7dvpt"] Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.232224 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.232279 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.232347 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cx2m\" (UniqueName: \"kubernetes.io/projected/060e677e-e80b-48ad-9f73-1242976939c5-kube-api-access-2cx2m\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" 
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.232387 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.232415 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-config\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.232455 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.234244 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.235618 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.236664 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-config\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.237976 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.238112 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.257417 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cx2m\" (UniqueName: \"kubernetes.io/projected/060e677e-e80b-48ad-9f73-1242976939c5-kube-api-access-2cx2m\") pod \"dnsmasq-dns-f84f9ccf-7dvpt\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:48 crc kubenswrapper[4729]: I0127 14:37:48.438258 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:49 crc kubenswrapper[4729]: I0127 14:37:49.013056 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-7dvpt"]
Jan 27 14:37:49 crc kubenswrapper[4729]: I0127 14:37:49.913731 4729 generic.go:334] "Generic (PLEG): container finished" podID="060e677e-e80b-48ad-9f73-1242976939c5" containerID="6d9619b42295ad0e3ab7ceecd3304a9079a371dbba1fe4f1f4a845a234518432" exitCode=0
Jan 27 14:37:49 crc kubenswrapper[4729]: I0127 14:37:49.914840 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" event={"ID":"060e677e-e80b-48ad-9f73-1242976939c5","Type":"ContainerDied","Data":"6d9619b42295ad0e3ab7ceecd3304a9079a371dbba1fe4f1f4a845a234518432"}
Jan 27 14:37:49 crc kubenswrapper[4729]: I0127 14:37:49.914915 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" event={"ID":"060e677e-e80b-48ad-9f73-1242976939c5","Type":"ContainerStarted","Data":"6a8fef118ba792178ad218f54caf453d54f583a91c6b245eb2a0991e7cf77bb9"}
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.652279 4729 scope.go:117] "RemoveContainer" containerID="f1f7c16478d54dd23e814d41deeb7510904ec075a7563052eb1273d6a22a4eae"
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.699794 4729 scope.go:117] "RemoveContainer" containerID="9680fb622bd803d2de59a7fddc3a1d506360aaa5f37f9deedc39d8d81bd12c65"
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.754958 4729 scope.go:117] "RemoveContainer" containerID="423747c95184e848b8c7fd0bc26a76f0d6f5a246f87678fdde9714c2c7f4fcc5"
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.758399 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.821646 4729 scope.go:117] "RemoveContainer" containerID="d3dc3d8fbfb4f2ff47d1f545a3371f55ce4f69874c6aa23dfa46dc2a802e8f97"
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.850606 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.851725 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="sg-core" containerID="cri-o://764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6" gracePeriod=30
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.851865 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="proxy-httpd" containerID="cri-o://4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb" gracePeriod=30
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.852733 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-notification-agent" containerID="cri-o://1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909" gracePeriod=30
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.852898 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-central-agent" containerID="cri-o://fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e" gracePeriod=30
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.886193 4729 scope.go:117] "RemoveContainer" containerID="b1157d29a5982f84630323de4c4399db2961b1582d1c2aa36ee6cae42f0e9f6d"
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.941396 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" event={"ID":"060e677e-e80b-48ad-9f73-1242976939c5","Type":"ContainerStarted","Data":"f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868"}
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.941431 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-log" containerID="cri-o://1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc" gracePeriod=30
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.941503 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-api" containerID="cri-o://ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d" gracePeriod=30
Jan 27 14:37:50 crc kubenswrapper[4729]: I0127 14:37:50.973147 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" podStartSLOduration=2.973125307 podStartE2EDuration="2.973125307s" podCreationTimestamp="2026-01-27 14:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:50.968521078 +0000 UTC m=+1957.552712092" watchObservedRunningTime="2026-01-27 14:37:50.973125307 +0000 UTC m=+1957.557316321"
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.959237 4729 generic.go:334] "Generic (PLEG): container finished" podID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerID="1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc" exitCode=143
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.959368 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8885a95-dae2-4ac3-92f1-217cd7302498","Type":"ContainerDied","Data":"1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc"}
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.965651 4729 generic.go:334] "Generic (PLEG): container finished" podID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerID="4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb" exitCode=0
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.965690 4729 generic.go:334] "Generic (PLEG): container finished" podID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerID="764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6" exitCode=2
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.965702 4729 generic.go:334] "Generic (PLEG): container finished" podID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerID="fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e" exitCode=0
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.965760 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerDied","Data":"4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb"}
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.965847 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerDied","Data":"764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6"}
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.965867 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerDied","Data":"fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e"}
Jan 27 14:37:51 crc kubenswrapper[4729]: I0127 14:37:51.966142 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt"
Jan 27 14:37:52 crc kubenswrapper[4729]: I0127 14:37:52.655426 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 14:37:52 crc kubenswrapper[4729]: I0127 14:37:52.655502 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.778500 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.901055 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-log-httpd\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.901250 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-run-httpd\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.901595 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9rx6\" (UniqueName: \"kubernetes.io/projected/bcce1286-b6d3-417f-abf4-7f076bede9a8-kube-api-access-m9rx6\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.901624 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-scripts\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.901646 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-combined-ca-bundle\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.901689 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-ceilometer-tls-certs\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.902057 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.902096 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.902638 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-config-data\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.902730 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-sg-core-conf-yaml\") pod \"bcce1286-b6d3-417f-abf4-7f076bede9a8\" (UID: \"bcce1286-b6d3-417f-abf4-7f076bede9a8\") "
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.906223 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.906258 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcce1286-b6d3-417f-abf4-7f076bede9a8-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.908178 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcce1286-b6d3-417f-abf4-7f076bede9a8-kube-api-access-m9rx6" (OuterVolumeSpecName: "kube-api-access-m9rx6") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "kube-api-access-m9rx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.911266 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-scripts" (OuterVolumeSpecName: "scripts") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.945398 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.987706 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.991464 4729 generic.go:334] "Generic (PLEG): container finished" podID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerID="1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909" exitCode=0
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.991503 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerDied","Data":"1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909"}
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.991529 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcce1286-b6d3-417f-abf4-7f076bede9a8","Type":"ContainerDied","Data":"acc7f6806bf30c568547952080e3635ee1f4c288fc216abb5064d90550db01fe"}
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.991546 4729 scope.go:117] "RemoveContainer" containerID="4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb"
Jan 27 14:37:53 crc kubenswrapper[4729]: I0127 14:37:53.991728 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.007668 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.007694 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9rx6\" (UniqueName: \"kubernetes.io/projected/bcce1286-b6d3-417f-abf4-7f076bede9a8-kube-api-access-m9rx6\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.007705 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-scripts\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.007714 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.042731 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.047938 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-config-data" (OuterVolumeSpecName: "config-data") pod "bcce1286-b6d3-417f-abf4-7f076bede9a8" (UID: "bcce1286-b6d3-417f-abf4-7f076bede9a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.085083 4729 scope.go:117] "RemoveContainer" containerID="764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.109575 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.109607 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcce1286-b6d3-417f-abf4-7f076bede9a8-config-data\") on node \"crc\" DevicePath \"\""
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.123181 4729 scope.go:117] "RemoveContainer" containerID="1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.193152 4729 scope.go:117] "RemoveContainer" containerID="fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.215693 4729 scope.go:117] "RemoveContainer" containerID="4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.216390 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb\": container with ID starting with 4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb not found: ID does not exist" containerID="4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.216447 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb"} err="failed to get container status \"4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb\": rpc error: code = NotFound desc = could not find container \"4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb\": container with ID starting with 4fd9cb62fd4c54baa5e803f01cfbf59a989d95362d98114f1ded03c8c5d80fdb not found: ID does not exist"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.216489 4729 scope.go:117] "RemoveContainer" containerID="764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.217196 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6\": container with ID starting with 764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6 not found: ID does not exist" containerID="764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.217243 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6"} err="failed to get container status \"764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6\": rpc error: code = NotFound desc = could not find container \"764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6\": container with ID starting with 764fa41bcd4b9a9965902c0bdda5f2229006cdf46040ce7cb12a3a76cb0c77e6 not found: ID does not exist"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.217272 4729 scope.go:117] "RemoveContainer" containerID="1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.217673 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909\": container with ID starting with 1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909 not found: ID does not exist" containerID="1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.217735 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909"} err="failed to get container status \"1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909\": rpc error: code = NotFound desc = could not find container \"1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909\": container with ID starting with 1ca629af0efb4949434ff4f8df89e570050862a1c04a5cdedc6ad56718b13909 not found: ID does not exist"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.217774 4729 scope.go:117] "RemoveContainer" containerID="fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.218106 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e\": container with ID starting with fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e not found: ID does not exist" containerID="fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.218133 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e"} err="failed to get container status \"fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e\": rpc error: code = NotFound desc = could not find container \"fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e\": container with ID starting with fbae520379f4302c2b8433a2593604962c655d224d9cd2b34b3751ec75cb230e not found: ID does not exist"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.373541 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.405304 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.425032 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.425792 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-notification-agent"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.425914 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-notification-agent"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.426034 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="sg-core"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.426103 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="sg-core"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.426198 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-central-agent"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.426269 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-central-agent"
Jan 27 14:37:54 crc kubenswrapper[4729]: E0127 14:37:54.426363 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="proxy-httpd"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.426437 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="proxy-httpd"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.426761 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-central-agent"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.426852 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="sg-core"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.426948 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="proxy-httpd"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.427025 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" containerName="ceilometer-notification-agent"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.429898 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.433737 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.433955 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.434125 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.437331 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622163 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-config-data\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622239 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-run-httpd\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622264 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622311 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-log-httpd\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622345 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622379 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstv9\" (UniqueName: \"kubernetes.io/projected/1d56250f-a83a-4920-bada-5c1a2aa546c5-kube-api-access-jstv9\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622595 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.622625 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-scripts\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724384 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724425 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-scripts\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724479 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-config-data\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724510 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-run-httpd\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724526 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724554 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-log-httpd\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724578 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.724601 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstv9\" (UniqueName: \"kubernetes.io/projected/1d56250f-a83a-4920-bada-5c1a2aa546c5-kube-api-access-jstv9\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.725693 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-run-httpd\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.725814 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-log-httpd\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.730125 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-scripts\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.731339 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.731689 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-config-data\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.732515 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.733151 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.741495 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jstv9\" (UniqueName: \"kubernetes.io/projected/1d56250f-a83a-4920-bada-5c1a2aa546c5-kube-api-access-jstv9\") pod \"ceilometer-0\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.765197 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.896703 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.933562 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvqfb\" (UniqueName: \"kubernetes.io/projected/d8885a95-dae2-4ac3-92f1-217cd7302498-kube-api-access-xvqfb\") pod \"d8885a95-dae2-4ac3-92f1-217cd7302498\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.933644 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8885a95-dae2-4ac3-92f1-217cd7302498-logs\") pod \"d8885a95-dae2-4ac3-92f1-217cd7302498\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.933744 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-config-data\") pod \"d8885a95-dae2-4ac3-92f1-217cd7302498\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.933783 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-combined-ca-bundle\") pod \"d8885a95-dae2-4ac3-92f1-217cd7302498\" (UID: \"d8885a95-dae2-4ac3-92f1-217cd7302498\") " Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.935713 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8885a95-dae2-4ac3-92f1-217cd7302498-logs" (OuterVolumeSpecName: "logs") pod "d8885a95-dae2-4ac3-92f1-217cd7302498" (UID: "d8885a95-dae2-4ac3-92f1-217cd7302498"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:37:54 crc kubenswrapper[4729]: I0127 14:37:54.946025 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8885a95-dae2-4ac3-92f1-217cd7302498-kube-api-access-xvqfb" (OuterVolumeSpecName: "kube-api-access-xvqfb") pod "d8885a95-dae2-4ac3-92f1-217cd7302498" (UID: "d8885a95-dae2-4ac3-92f1-217cd7302498"). InnerVolumeSpecName "kube-api-access-xvqfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.021240 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8885a95-dae2-4ac3-92f1-217cd7302498" (UID: "d8885a95-dae2-4ac3-92f1-217cd7302498"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.029480 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-config-data" (OuterVolumeSpecName: "config-data") pod "d8885a95-dae2-4ac3-92f1-217cd7302498" (UID: "d8885a95-dae2-4ac3-92f1-217cd7302498"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.036161 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvqfb\" (UniqueName: \"kubernetes.io/projected/d8885a95-dae2-4ac3-92f1-217cd7302498-kube-api-access-xvqfb\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.036195 4729 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8885a95-dae2-4ac3-92f1-217cd7302498-logs\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.036205 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.036216 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8885a95-dae2-4ac3-92f1-217cd7302498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.041766 4729 generic.go:334] "Generic (PLEG): container finished" podID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerID="ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d" exitCode=0 Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.041812 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8885a95-dae2-4ac3-92f1-217cd7302498","Type":"ContainerDied","Data":"ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d"} Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.041843 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8885a95-dae2-4ac3-92f1-217cd7302498","Type":"ContainerDied","Data":"1632213ff657c6e66be06bbeb8318e8a860fb76bd51dbb55fbf79bb63e6b0cc6"} Jan 27 14:37:55 crc kubenswrapper[4729]: 
I0127 14:37:55.041862 4729 scope.go:117] "RemoveContainer" containerID="ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.041991 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.115344 4729 scope.go:117] "RemoveContainer" containerID="1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.122172 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.143123 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.164134 4729 scope.go:117] "RemoveContainer" containerID="ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.164220 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:55 crc kubenswrapper[4729]: E0127 14:37:55.165068 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-api" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.165093 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-api" Jan 27 14:37:55 crc kubenswrapper[4729]: E0127 14:37:55.165147 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-log" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.165155 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-log" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.165475 4729 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-log" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.165497 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" containerName="nova-api-api" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.167370 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: E0127 14:37:55.168534 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d\": container with ID starting with ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d not found: ID does not exist" containerID="ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.168583 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d"} err="failed to get container status \"ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d\": rpc error: code = NotFound desc = could not find container \"ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d\": container with ID starting with ea674e76d95c43712b039220b22deb12e252bd4d507f587bad240c1d18f0218d not found: ID does not exist" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.168611 4729 scope.go:117] "RemoveContainer" containerID="1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc" Jan 27 14:37:55 crc kubenswrapper[4729]: E0127 14:37:55.169209 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc\": container with ID starting with 
1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc not found: ID does not exist" containerID="1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.169239 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc"} err="failed to get container status \"1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc\": rpc error: code = NotFound desc = could not find container \"1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc\": container with ID starting with 1af408019068ea296d4f3b464cda6bb9a3a7fe025dc13a6d568efe7b3dc462cc not found: ID does not exist" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.169834 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.170054 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.169898 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.181429 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.342823 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-public-tls-certs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.342872 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwq9f\" (UniqueName: 
\"kubernetes.io/projected/48379ff4-1d0a-400d-a40b-a3ed65415c39-kube-api-access-gwq9f\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.343229 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.343280 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-config-data\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.343300 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48379ff4-1d0a-400d-a40b-a3ed65415c39-logs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.343324 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.380158 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:37:55 crc kubenswrapper[4729]: W0127 14:37:55.380760 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d56250f_a83a_4920_bada_5c1a2aa546c5.slice/crio-086298e08057c4df9a637dc239275790d53dd75d105b10bfb8a4e91cf4e9e18f WatchSource:0}: Error finding container 086298e08057c4df9a637dc239275790d53dd75d105b10bfb8a4e91cf4e9e18f: Status 404 returned error can't find the container with id 086298e08057c4df9a637dc239275790d53dd75d105b10bfb8a4e91cf4e9e18f Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.445633 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.445714 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-config-data\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.445739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48379ff4-1d0a-400d-a40b-a3ed65415c39-logs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.445769 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.445923 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-public-tls-certs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.447314 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48379ff4-1d0a-400d-a40b-a3ed65415c39-logs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.445948 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwq9f\" (UniqueName: \"kubernetes.io/projected/48379ff4-1d0a-400d-a40b-a3ed65415c39-kube-api-access-gwq9f\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.451198 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-public-tls-certs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.451416 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-internal-tls-certs\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.460495 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.462680 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48379ff4-1d0a-400d-a40b-a3ed65415c39-config-data\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.463100 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwq9f\" (UniqueName: \"kubernetes.io/projected/48379ff4-1d0a-400d-a40b-a3ed65415c39-kube-api-access-gwq9f\") pod \"nova-api-0\" (UID: \"48379ff4-1d0a-400d-a40b-a3ed65415c39\") " pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.491463 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 27 14:37:55 crc kubenswrapper[4729]: I0127 14:37:55.960210 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 27 14:37:55 crc kubenswrapper[4729]: W0127 14:37:55.964538 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48379ff4_1d0a_400d_a40b_a3ed65415c39.slice/crio-f0865e0be79636303379c1f95cabde5f14df76ac32c78aee1ecf3ac60f63572b WatchSource:0}: Error finding container f0865e0be79636303379c1f95cabde5f14df76ac32c78aee1ecf3ac60f63572b: Status 404 returned error can't find the container with id f0865e0be79636303379c1f95cabde5f14df76ac32c78aee1ecf3ac60f63572b Jan 27 14:37:56 crc kubenswrapper[4729]: I0127 14:37:56.092453 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcce1286-b6d3-417f-abf4-7f076bede9a8" path="/var/lib/kubelet/pods/bcce1286-b6d3-417f-abf4-7f076bede9a8/volumes" Jan 27 14:37:56 crc kubenswrapper[4729]: I0127 14:37:56.094491 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8885a95-dae2-4ac3-92f1-217cd7302498" path="/var/lib/kubelet/pods/d8885a95-dae2-4ac3-92f1-217cd7302498/volumes" Jan 27 14:37:56 crc 
kubenswrapper[4729]: I0127 14:37:56.095241 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48379ff4-1d0a-400d-a40b-a3ed65415c39","Type":"ContainerStarted","Data":"f0865e0be79636303379c1f95cabde5f14df76ac32c78aee1ecf3ac60f63572b"} Jan 27 14:37:56 crc kubenswrapper[4729]: I0127 14:37:56.095269 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerStarted","Data":"086298e08057c4df9a637dc239275790d53dd75d105b10bfb8a4e91cf4e9e18f"} Jan 27 14:37:57 crc kubenswrapper[4729]: I0127 14:37:57.072766 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerStarted","Data":"986cbaba6a70637f749db8a4cf8beab12173d17431770d4bb4234f10d117b23b"} Jan 27 14:37:57 crc kubenswrapper[4729]: I0127 14:37:57.074889 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48379ff4-1d0a-400d-a40b-a3ed65415c39","Type":"ContainerStarted","Data":"2667996e63b8144c54b2d085fe1573b798f8b554f5eb7b040c52343d855ab1f8"} Jan 27 14:37:57 crc kubenswrapper[4729]: I0127 14:37:57.074982 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"48379ff4-1d0a-400d-a40b-a3ed65415c39","Type":"ContainerStarted","Data":"a1559bb48f17b59b2d6e79065f1d1c47a04d5e8cfcc538d23310e23ec45299dd"} Jan 27 14:37:57 crc kubenswrapper[4729]: I0127 14:37:57.104983 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.104962347 podStartE2EDuration="2.104962347s" podCreationTimestamp="2026-01-27 14:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:37:57.095457233 +0000 UTC m=+1963.679648247" watchObservedRunningTime="2026-01-27 14:37:57.104962347 +0000 
UTC m=+1963.689153351" Jan 27 14:37:58 crc kubenswrapper[4729]: I0127 14:37:58.440161 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:37:58 crc kubenswrapper[4729]: I0127 14:37:58.509023 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-fmwhh"] Jan 27 14:37:58 crc kubenswrapper[4729]: I0127 14:37:58.509284 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" containerName="dnsmasq-dns" containerID="cri-o://d041c1cfb3c6745063ba7829a6344f90337f4e4cd012197dfd6e02498cd8e6b1" gracePeriod=10 Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.115273 4729 generic.go:334] "Generic (PLEG): container finished" podID="c412d342-19d8-4203-9b84-a57c996cb21b" containerID="d041c1cfb3c6745063ba7829a6344f90337f4e4cd012197dfd6e02498cd8e6b1" exitCode=0 Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.115602 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" event={"ID":"c412d342-19d8-4203-9b84-a57c996cb21b","Type":"ContainerDied","Data":"d041c1cfb3c6745063ba7829a6344f90337f4e4cd012197dfd6e02498cd8e6b1"} Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.126562 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerStarted","Data":"9dc61fc3d303278f346a6c0b9e9a491ddf77332c7f054d3ea6eb0511c06bb67b"} Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.385331 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.473798 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-nb\") pod \"c412d342-19d8-4203-9b84-a57c996cb21b\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.473898 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-svc\") pod \"c412d342-19d8-4203-9b84-a57c996cb21b\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.474119 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862kg\" (UniqueName: \"kubernetes.io/projected/c412d342-19d8-4203-9b84-a57c996cb21b-kube-api-access-862kg\") pod \"c412d342-19d8-4203-9b84-a57c996cb21b\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.474241 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-config\") pod \"c412d342-19d8-4203-9b84-a57c996cb21b\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.474330 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-sb\") pod \"c412d342-19d8-4203-9b84-a57c996cb21b\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.474394 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-swift-storage-0\") pod \"c412d342-19d8-4203-9b84-a57c996cb21b\" (UID: \"c412d342-19d8-4203-9b84-a57c996cb21b\") " Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.482130 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c412d342-19d8-4203-9b84-a57c996cb21b-kube-api-access-862kg" (OuterVolumeSpecName: "kube-api-access-862kg") pod "c412d342-19d8-4203-9b84-a57c996cb21b" (UID: "c412d342-19d8-4203-9b84-a57c996cb21b"). InnerVolumeSpecName "kube-api-access-862kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.546056 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c412d342-19d8-4203-9b84-a57c996cb21b" (UID: "c412d342-19d8-4203-9b84-a57c996cb21b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.567405 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c412d342-19d8-4203-9b84-a57c996cb21b" (UID: "c412d342-19d8-4203-9b84-a57c996cb21b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.573323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c412d342-19d8-4203-9b84-a57c996cb21b" (UID: "c412d342-19d8-4203-9b84-a57c996cb21b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.573829 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c412d342-19d8-4203-9b84-a57c996cb21b" (UID: "c412d342-19d8-4203-9b84-a57c996cb21b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.576676 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.576707 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.576717 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.576726 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.576756 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862kg\" (UniqueName: \"kubernetes.io/projected/c412d342-19d8-4203-9b84-a57c996cb21b-kube-api-access-862kg\") on node \"crc\" DevicePath \"\"" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.587863 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-config" (OuterVolumeSpecName: "config") pod "c412d342-19d8-4203-9b84-a57c996cb21b" (UID: "c412d342-19d8-4203-9b84-a57c996cb21b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:37:59 crc kubenswrapper[4729]: I0127 14:37:59.678554 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c412d342-19d8-4203-9b84-a57c996cb21b-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.159941 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" event={"ID":"c412d342-19d8-4203-9b84-a57c996cb21b","Type":"ContainerDied","Data":"c67b25e39eaaceeffe436b4fc7c13ebc68ff10fb3770e8641ff316f2128b9f74"} Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.159966 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-fmwhh" Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.159991 4729 scope.go:117] "RemoveContainer" containerID="d041c1cfb3c6745063ba7829a6344f90337f4e4cd012197dfd6e02498cd8e6b1" Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.169855 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerStarted","Data":"c710fbda7b3b6bd50df523acda39205b60a2589f7b918f64ec30fadbb50a3cd0"} Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.204549 4729 scope.go:117] "RemoveContainer" containerID="b157ad2d3b07f9cbfcea12e5f3a08b1b60ca941fdb5dd97f3662e9d9b98b7ac2" Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.212504 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-fmwhh"] Jan 27 14:38:00 crc kubenswrapper[4729]: I0127 14:38:00.237311 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-568d7fd7cf-fmwhh"] Jan 27 14:38:02 crc kubenswrapper[4729]: I0127 14:38:02.093791 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" path="/var/lib/kubelet/pods/c412d342-19d8-4203-9b84-a57c996cb21b/volumes" Jan 27 14:38:03 crc kubenswrapper[4729]: I0127 14:38:03.207971 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerStarted","Data":"d3bca45ec048d8885a8f7715a7d3dab031e43e575c49e7f22b0c981170403353"} Jan 27 14:38:03 crc kubenswrapper[4729]: I0127 14:38:03.208649 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:38:03 crc kubenswrapper[4729]: I0127 14:38:03.245594 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.325968597 podStartE2EDuration="9.245572313s" podCreationTimestamp="2026-01-27 14:37:54 +0000 UTC" firstStartedPulling="2026-01-27 14:37:55.383799435 +0000 UTC m=+1961.967990439" lastFinishedPulling="2026-01-27 14:38:02.303403151 +0000 UTC m=+1968.887594155" observedRunningTime="2026-01-27 14:38:03.228761591 +0000 UTC m=+1969.812952595" watchObservedRunningTime="2026-01-27 14:38:03.245572313 +0000 UTC m=+1969.829763317" Jan 27 14:38:05 crc kubenswrapper[4729]: I0127 14:38:05.491812 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:38:05 crc kubenswrapper[4729]: I0127 14:38:05.492178 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 27 14:38:06 crc kubenswrapper[4729]: I0127 14:38:06.506069 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48379ff4-1d0a-400d-a40b-a3ed65415c39" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:38:06 crc kubenswrapper[4729]: I0127 14:38:06.506092 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="48379ff4-1d0a-400d-a40b-a3ed65415c39" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 14:38:15 crc kubenswrapper[4729]: I0127 14:38:15.499922 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:38:15 crc kubenswrapper[4729]: I0127 14:38:15.501077 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 27 14:38:15 crc kubenswrapper[4729]: I0127 14:38:15.501119 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:38:15 crc kubenswrapper[4729]: I0127 14:38:15.508375 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:38:16 crc kubenswrapper[4729]: I0127 14:38:16.357926 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 27 14:38:16 crc kubenswrapper[4729]: I0127 14:38:16.367202 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 27 14:38:22 crc kubenswrapper[4729]: I0127 14:38:22.655406 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:38:22 crc kubenswrapper[4729]: I0127 14:38:22.655985 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:38:24 crc kubenswrapper[4729]: I0127 14:38:24.773693 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.705461 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-6cnrq"] Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.716480 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-6cnrq"] Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.787927 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-t7d94"] Jan 27 14:38:36 crc kubenswrapper[4729]: E0127 14:38:36.788734 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" containerName="dnsmasq-dns" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.788830 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" containerName="dnsmasq-dns" Jan 27 14:38:36 crc kubenswrapper[4729]: E0127 14:38:36.788987 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" containerName="init" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.789066 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" containerName="init" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.789425 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c412d342-19d8-4203-9b84-a57c996cb21b" containerName="dnsmasq-dns" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.790503 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.812078 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-t7d94"] Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.925355 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-config-data\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.925402 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-combined-ca-bundle\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:36 crc kubenswrapper[4729]: I0127 14:38:36.925571 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdxqw\" (UniqueName: \"kubernetes.io/projected/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-kube-api-access-vdxqw\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.027676 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdxqw\" (UniqueName: \"kubernetes.io/projected/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-kube-api-access-vdxqw\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.027815 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-config-data\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.027839 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-combined-ca-bundle\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.035494 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-config-data\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.037571 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-combined-ca-bundle\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.079586 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdxqw\" (UniqueName: \"kubernetes.io/projected/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-kube-api-access-vdxqw\") pod \"heat-db-sync-t7d94\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.128890 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-t7d94" Jan 27 14:38:37 crc kubenswrapper[4729]: I0127 14:38:37.649690 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-t7d94"] Jan 27 14:38:38 crc kubenswrapper[4729]: I0127 14:38:38.085658 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9" path="/var/lib/kubelet/pods/d5ddc1dc-d7c8-4a7a-9467-ee0048d258c9/volumes" Jan 27 14:38:38 crc kubenswrapper[4729]: I0127 14:38:38.641484 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t7d94" event={"ID":"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5","Type":"ContainerStarted","Data":"8c8cdc8a143d6076c2908fc1ef2c7e7de2e757bbc7b51ac96bb52a9912ca4049"} Jan 27 14:38:38 crc kubenswrapper[4729]: I0127 14:38:38.774149 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:38:39 crc kubenswrapper[4729]: I0127 14:38:39.737483 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:38:39 crc kubenswrapper[4729]: I0127 14:38:39.738326 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-central-agent" containerID="cri-o://986cbaba6a70637f749db8a4cf8beab12173d17431770d4bb4234f10d117b23b" gracePeriod=30 Jan 27 14:38:39 crc kubenswrapper[4729]: I0127 14:38:39.738662 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="proxy-httpd" containerID="cri-o://d3bca45ec048d8885a8f7715a7d3dab031e43e575c49e7f22b0c981170403353" gracePeriod=30 Jan 27 14:38:39 crc kubenswrapper[4729]: I0127 14:38:39.738758 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" 
containerName="ceilometer-notification-agent" containerID="cri-o://9dc61fc3d303278f346a6c0b9e9a491ddf77332c7f054d3ea6eb0511c06bb67b" gracePeriod=30 Jan 27 14:38:39 crc kubenswrapper[4729]: I0127 14:38:39.738804 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="sg-core" containerID="cri-o://c710fbda7b3b6bd50df523acda39205b60a2589f7b918f64ec30fadbb50a3cd0" gracePeriod=30 Jan 27 14:38:40 crc kubenswrapper[4729]: E0127 14:38:40.122071 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d56250f_a83a_4920_bada_5c1a2aa546c5.slice/crio-d3bca45ec048d8885a8f7715a7d3dab031e43e575c49e7f22b0c981170403353.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d56250f_a83a_4920_bada_5c1a2aa546c5.slice/crio-c710fbda7b3b6bd50df523acda39205b60a2589f7b918f64ec30fadbb50a3cd0.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:38:40 crc kubenswrapper[4729]: I0127 14:38:40.589039 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:38:40 crc kubenswrapper[4729]: I0127 14:38:40.684278 4729 generic.go:334] "Generic (PLEG): container finished" podID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerID="d3bca45ec048d8885a8f7715a7d3dab031e43e575c49e7f22b0c981170403353" exitCode=0 Jan 27 14:38:40 crc kubenswrapper[4729]: I0127 14:38:40.684509 4729 generic.go:334] "Generic (PLEG): container finished" podID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerID="c710fbda7b3b6bd50df523acda39205b60a2589f7b918f64ec30fadbb50a3cd0" exitCode=2 Jan 27 14:38:40 crc kubenswrapper[4729]: I0127 14:38:40.684478 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerDied","Data":"d3bca45ec048d8885a8f7715a7d3dab031e43e575c49e7f22b0c981170403353"} Jan 27 14:38:40 crc kubenswrapper[4729]: I0127 14:38:40.684655 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerDied","Data":"c710fbda7b3b6bd50df523acda39205b60a2589f7b918f64ec30fadbb50a3cd0"} Jan 27 14:38:41 crc kubenswrapper[4729]: I0127 14:38:41.709009 4729 generic.go:334] "Generic (PLEG): container finished" podID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerID="986cbaba6a70637f749db8a4cf8beab12173d17431770d4bb4234f10d117b23b" exitCode=0 Jan 27 14:38:41 crc kubenswrapper[4729]: I0127 14:38:41.709052 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerDied","Data":"986cbaba6a70637f749db8a4cf8beab12173d17431770d4bb4234f10d117b23b"} Jan 27 14:38:42 crc kubenswrapper[4729]: I0127 14:38:42.754118 4729 generic.go:334] "Generic (PLEG): container finished" podID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerID="9dc61fc3d303278f346a6c0b9e9a491ddf77332c7f054d3ea6eb0511c06bb67b" exitCode=0 Jan 27 14:38:42 crc kubenswrapper[4729]: I0127 14:38:42.754337 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerDied","Data":"9dc61fc3d303278f346a6c0b9e9a491ddf77332c7f054d3ea6eb0511c06bb67b"} Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.097598 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.144945 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-scripts\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.146361 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-combined-ca-bundle\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.146692 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jstv9\" (UniqueName: \"kubernetes.io/projected/1d56250f-a83a-4920-bada-5c1a2aa546c5-kube-api-access-jstv9\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.146779 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-log-httpd\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.146980 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-sg-core-conf-yaml\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.158884 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-run-httpd\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.149406 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.159338 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.160675 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-config-data\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.160743 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-ceilometer-tls-certs\") pod \"1d56250f-a83a-4920-bada-5c1a2aa546c5\" (UID: \"1d56250f-a83a-4920-bada-5c1a2aa546c5\") " Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.166377 4729 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 
14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.166410 4729 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1d56250f-a83a-4920-bada-5c1a2aa546c5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.169238 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d56250f-a83a-4920-bada-5c1a2aa546c5-kube-api-access-jstv9" (OuterVolumeSpecName: "kube-api-access-jstv9") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "kube-api-access-jstv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.170141 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-scripts" (OuterVolumeSpecName: "scripts") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.206058 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.268870 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jstv9\" (UniqueName: \"kubernetes.io/projected/1d56250f-a83a-4920-bada-5c1a2aa546c5-kube-api-access-jstv9\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.268925 4729 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.268939 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.271408 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.283512 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.308400 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-config-data" (OuterVolumeSpecName: "config-data") pod "1d56250f-a83a-4920-bada-5c1a2aa546c5" (UID: "1d56250f-a83a-4920-bada-5c1a2aa546c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.371279 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.371334 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.371349 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d56250f-a83a-4920-bada-5c1a2aa546c5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.799664 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1d56250f-a83a-4920-bada-5c1a2aa546c5","Type":"ContainerDied","Data":"086298e08057c4df9a637dc239275790d53dd75d105b10bfb8a4e91cf4e9e18f"} Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.799740 4729 scope.go:117] "RemoveContainer" containerID="d3bca45ec048d8885a8f7715a7d3dab031e43e575c49e7f22b0c981170403353" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.799726 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.852451 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.872969 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883133 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:38:44 crc kubenswrapper[4729]: E0127 14:38:44.883593 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-notification-agent" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883610 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-notification-agent" Jan 27 14:38:44 crc kubenswrapper[4729]: E0127 14:38:44.883630 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="proxy-httpd" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883636 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="proxy-httpd" Jan 27 14:38:44 crc kubenswrapper[4729]: E0127 14:38:44.883650 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-central-agent" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883660 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-central-agent" Jan 27 14:38:44 crc kubenswrapper[4729]: E0127 14:38:44.883693 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="sg-core" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883699 4729 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="sg-core" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883935 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="sg-core" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883951 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="proxy-httpd" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883966 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-notification-agent" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.883982 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" containerName="ceilometer-central-agent" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.885959 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.891343 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.891568 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.901746 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.918428 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.985731 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-scripts\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.985784 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-config-data\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.985812 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bf3a8e-2abb-4659-9719-fdffb80a92b1-run-httpd\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.985841 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.985917 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bf3a8e-2abb-4659-9719-fdffb80a92b1-log-httpd\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.985966 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.986043 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvglf\" (UniqueName: \"kubernetes.io/projected/97bf3a8e-2abb-4659-9719-fdffb80a92b1-kube-api-access-dvglf\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:44 crc kubenswrapper[4729]: I0127 14:38:44.986117 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.087924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bf3a8e-2abb-4659-9719-fdffb80a92b1-log-httpd\") pod \"ceilometer-0\" (UID: 
\"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088020 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088074 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvglf\" (UniqueName: \"kubernetes.io/projected/97bf3a8e-2abb-4659-9719-fdffb80a92b1-kube-api-access-dvglf\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088214 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088316 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-scripts\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088383 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-config-data\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088424 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bf3a8e-2abb-4659-9719-fdffb80a92b1-run-httpd\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088487 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.088487 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bf3a8e-2abb-4659-9719-fdffb80a92b1-log-httpd\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.089670 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bf3a8e-2abb-4659-9719-fdffb80a92b1-run-httpd\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.093581 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.093653 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 
14:38:45.094851 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-config-data\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.098865 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.105460 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bf3a8e-2abb-4659-9719-fdffb80a92b1-scripts\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.120660 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvglf\" (UniqueName: \"kubernetes.io/projected/97bf3a8e-2abb-4659-9719-fdffb80a92b1-kube-api-access-dvglf\") pod \"ceilometer-0\" (UID: \"97bf3a8e-2abb-4659-9719-fdffb80a92b1\") " pod="openstack/ceilometer-0" Jan 27 14:38:45 crc kubenswrapper[4729]: I0127 14:38:45.207075 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 27 14:38:46 crc kubenswrapper[4729]: I0127 14:38:46.066910 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d56250f-a83a-4920-bada-5c1a2aa546c5" path="/var/lib/kubelet/pods/1d56250f-a83a-4920-bada-5c1a2aa546c5/volumes" Jan 27 14:38:47 crc kubenswrapper[4729]: I0127 14:38:47.531417 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="rabbitmq" containerID="cri-o://9e1a560e097c137694bb9d2b51a4a3e63b94f1421aa80b5afa787aed36c0174b" gracePeriod=604794 Jan 27 14:38:47 crc kubenswrapper[4729]: I0127 14:38:47.585150 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" containerID="cri-o://dccbf84a424976db52668c4deacfb2af44ee3eb6ffeb958a89bd909f12d954ed" gracePeriod=604792 Jan 27 14:38:49 crc kubenswrapper[4729]: I0127 14:38:49.963400 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:38:50 crc kubenswrapper[4729]: I0127 14:38:50.817913 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Jan 27 14:38:52 crc kubenswrapper[4729]: I0127 14:38:52.655435 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 
14:38:52 crc kubenswrapper[4729]: I0127 14:38:52.655788 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:38:52 crc kubenswrapper[4729]: I0127 14:38:52.655852 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:38:52 crc kubenswrapper[4729]: I0127 14:38:52.656693 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b06dc6218d9c2c78f700138490d117911b33585502602acff7be6e8841ff698"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:38:52 crc kubenswrapper[4729]: I0127 14:38:52.656780 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://2b06dc6218d9c2c78f700138490d117911b33585502602acff7be6e8841ff698" gracePeriod=600 Jan 27 14:38:53 crc kubenswrapper[4729]: I0127 14:38:53.924954 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="2b06dc6218d9c2c78f700138490d117911b33585502602acff7be6e8841ff698" exitCode=0 Jan 27 14:38:53 crc kubenswrapper[4729]: I0127 14:38:53.924995 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"2b06dc6218d9c2c78f700138490d117911b33585502602acff7be6e8841ff698"} Jan 27 14:38:53 crc kubenswrapper[4729]: I0127 14:38:53.952045 4729 scope.go:117] "RemoveContainer" containerID="c710fbda7b3b6bd50df523acda39205b60a2589f7b918f64ec30fadbb50a3cd0" Jan 27 14:38:58 crc kubenswrapper[4729]: I0127 14:38:58.025883 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"13cfdd20-ad90-472d-8962-6bec29b3fa74","Type":"ContainerDied","Data":"9e1a560e097c137694bb9d2b51a4a3e63b94f1421aa80b5afa787aed36c0174b"} Jan 27 14:38:58 crc kubenswrapper[4729]: I0127 14:38:58.025868 4729 generic.go:334] "Generic (PLEG): container finished" podID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerID="9e1a560e097c137694bb9d2b51a4a3e63b94f1421aa80b5afa787aed36c0174b" exitCode=0 Jan 27 14:38:59 crc kubenswrapper[4729]: I0127 14:38:59.041742 4729 generic.go:334] "Generic (PLEG): container finished" podID="d148c837-c681-4446-9e81-195c19108d09" containerID="dccbf84a424976db52668c4deacfb2af44ee3eb6ffeb958a89bd909f12d954ed" exitCode=0 Jan 27 14:38:59 crc kubenswrapper[4729]: I0127 14:38:59.041807 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d148c837-c681-4446-9e81-195c19108d09","Type":"ContainerDied","Data":"dccbf84a424976db52668c4deacfb2af44ee3eb6ffeb958a89bd909f12d954ed"} Jan 27 14:38:59 crc kubenswrapper[4729]: I0127 14:38:59.962437 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Jan 27 14:38:59 crc kubenswrapper[4729]: I0127 14:38:59.982108 4729 scope.go:117] "RemoveContainer" containerID="9dc61fc3d303278f346a6c0b9e9a491ddf77332c7f054d3ea6eb0511c06bb67b" Jan 27 14:39:00 crc kubenswrapper[4729]: I0127 
14:39:00.734670 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 27 14:39:01 crc kubenswrapper[4729]: W0127 14:39:01.056527 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97bf3a8e_2abb_4659_9719_fdffb80a92b1.slice/crio-d256c1fde4ce98831e9c47abe59c368d419c54e8d8f8c599bb5c1e79e5491097 WatchSource:0}: Error finding container d256c1fde4ce98831e9c47abe59c368d419c54e8d8f8c599bb5c1e79e5491097: Status 404 returned error can't find the container with id d256c1fde4ce98831e9c47abe59c368d419c54e8d8f8c599bb5c1e79e5491097 Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.091661 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bf3a8e-2abb-4659-9719-fdffb80a92b1","Type":"ContainerStarted","Data":"d256c1fde4ce98831e9c47abe59c368d419c54e8d8f8c599bb5c1e79e5491097"} Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.096143 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"13cfdd20-ad90-472d-8962-6bec29b3fa74","Type":"ContainerDied","Data":"3f271adda80a39d7c5d837f3cb070d4a0ae2686d8abe564434031fffc3866c53"} Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.096219 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f271adda80a39d7c5d837f3cb070d4a0ae2686d8abe564434031fffc3866c53" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.141909 4729 scope.go:117] "RemoveContainer" containerID="986cbaba6a70637f749db8a4cf8beab12173d17431770d4bb4234f10d117b23b" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.259818 4729 scope.go:117] "RemoveContainer" containerID="387d13e4587b0c0f4710906ff53d7c856d05c3df9f7114c8d592e194545d2898" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.283192 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.291731 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442391 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13cfdd20-ad90-472d-8962-6bec29b3fa74-pod-info\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442460 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-server-conf\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442479 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-tls\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442534 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-plugins-conf\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442568 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d148c837-c681-4446-9e81-195c19108d09-erlang-cookie-secret\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " 
Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442602 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-erlang-cookie\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442635 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2llh\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-kube-api-access-c2llh\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.442669 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d148c837-c681-4446-9e81-195c19108d09-pod-info\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.466767 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.466840 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-server-conf\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.467639 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.467721 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-confd\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.467794 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-plugins\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.467829 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-plugins\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.467858 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmsgt\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-kube-api-access-tmsgt\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.467903 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-config-data\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc 
kubenswrapper[4729]: I0127 14:39:01.467986 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13cfdd20-ad90-472d-8962-6bec29b3fa74-erlang-cookie-secret\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.468027 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-confd\") pod \"13cfdd20-ad90-472d-8962-6bec29b3fa74\" (UID: \"13cfdd20-ad90-472d-8962-6bec29b3fa74\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.468060 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-tls\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.468117 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-erlang-cookie\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.468179 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-config-data\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.468198 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-plugins-conf\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.474580 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.475558 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.481794 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.483774 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.484313 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/13cfdd20-ad90-472d-8962-6bec29b3fa74-pod-info" (OuterVolumeSpecName: "pod-info") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.485166 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-kube-api-access-tmsgt" (OuterVolumeSpecName: "kube-api-access-tmsgt") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "kube-api-access-tmsgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.487165 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.496401 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d148c837-c681-4446-9e81-195c19108d09-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.503188 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d148c837-c681-4446-9e81-195c19108d09-pod-info" (OuterVolumeSpecName: "pod-info") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.518940 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.519123 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-kube-api-access-c2llh" (OuterVolumeSpecName: "kube-api-access-c2llh") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "kube-api-access-c2llh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.564040 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13cfdd20-ad90-472d-8962-6bec29b3fa74-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.564128 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.571848 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd" (OuterVolumeSpecName: "persistence") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.583577 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: E0127 14:39:01.589904 4729 reconciler_common.go:156] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") : UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes/kubernetes.io~csi/pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes/kubernetes.io~csi/pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd/vol_data.json]: open /var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes/kubernetes.io~csi/pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"d148c837-c681-4446-9e81-195c19108d09\" (UID: \"d148c837-c681-4446-9e81-195c19108d09\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes/kubernetes.io~csi/pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes/kubernetes.io~csi/pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd/vol_data.json]: open 
/var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes/kubernetes.io~csi/pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd/vol_data.json: no such file or directory" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591145 4729 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13cfdd20-ad90-472d-8962-6bec29b3fa74-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591162 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591173 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591184 4729 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591194 4729 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13cfdd20-ad90-472d-8962-6bec29b3fa74-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591204 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591214 4729 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591224 4729 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d148c837-c681-4446-9e81-195c19108d09-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591234 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591243 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2llh\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-kube-api-access-c2llh\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591256 4729 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d148c837-c681-4446-9e81-195c19108d09-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591282 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") on node \"crc\" " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591294 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591305 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.591316 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmsgt\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-kube-api-access-tmsgt\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.593140 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-config-data" (OuterVolumeSpecName: "config-data") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.594489 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-config-data" (OuterVolumeSpecName: "config-data") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.644687 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6" (OuterVolumeSpecName: "persistence") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.693317 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.693382 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") on node \"crc\" " Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.693398 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.750262 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-server-conf" (OuterVolumeSpecName: "server-conf") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.752298 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.752307 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.760557 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6") on node "crc" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.760642 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd") on node "crc" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.767285 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-server-conf" (OuterVolumeSpecName: "server-conf") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.796117 4729 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/13cfdd20-ad90-472d-8962-6bec29b3fa74-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.796158 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.796173 4729 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d148c837-c681-4446-9e81-195c19108d09-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.796185 4729 reconciler_common.go:293] "Volume detached for volume 
\"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.808992 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "13cfdd20-ad90-472d-8962-6bec29b3fa74" (UID: "13cfdd20-ad90-472d-8962-6bec29b3fa74"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.810280 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d148c837-c681-4446-9e81-195c19108d09" (UID: "d148c837-c681-4446-9e81-195c19108d09"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.898666 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d148c837-c681-4446-9e81-195c19108d09-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:01 crc kubenswrapper[4729]: I0127 14:39:01.898715 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13cfdd20-ad90-472d-8962-6bec29b3fa74-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.130172 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42"} Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.132183 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"d148c837-c681-4446-9e81-195c19108d09","Type":"ContainerDied","Data":"2c63e8eaf8c96324b2b1ebb789413142b9d522fd5aa7ad5f2fb892b7c221cac7"} Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.132235 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.132352 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.203953 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.254261 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.269821 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.279394 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.292481 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:39:02 crc kubenswrapper[4729]: E0127 14:39:02.293093 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="setup-container" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.293115 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="setup-container" Jan 27 14:39:02 crc kubenswrapper[4729]: E0127 14:39:02.293140 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.293148 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" Jan 27 14:39:02 crc kubenswrapper[4729]: E0127 14:39:02.293166 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="rabbitmq" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.293174 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" 
containerName="rabbitmq" Jan 27 14:39:02 crc kubenswrapper[4729]: E0127 14:39:02.293194 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="setup-container" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.293199 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="setup-container" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.293425 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="rabbitmq" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.293446 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d148c837-c681-4446-9e81-195c19108d09" containerName="rabbitmq" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.294992 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.298411 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7b4hd" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.299077 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.299263 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.299415 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.299575 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.299754 4729 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-cell1-config-data" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.299927 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.306684 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.309558 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.320553 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.320636 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.320665 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.320903 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.320967 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321089 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321119 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321168 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321208 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321243 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321302 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.321328 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qsz\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-kube-api-access-q4qsz\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.335736 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424005 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7c7r\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-kube-api-access-w7c7r\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424095 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424121 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424263 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424335 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424376 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b1f41d-64ad-4dec-a082-9e81438dfe0f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424418 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424474 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424530 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424594 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424642 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qsz\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-kube-api-access-q4qsz\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424699 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 
14:39:02.424818 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424852 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424894 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-config-data\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424932 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424954 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424972 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.424974 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.425226 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.426126 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.426499 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.427228 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.427353 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b1f41d-64ad-4dec-a082-9e81438dfe0f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.427428 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.427499 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.427550 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.429684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.429716 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.431335 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.442562 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qsz\" (UniqueName: \"kubernetes.io/projected/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-kube-api-access-q4qsz\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.443782 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.447082 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.447126 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f096062f903b408b3eba2bc5650a80f546d059a652f497c12a0811100235309f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.529975 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530111 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530155 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530202 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7c7r\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-kube-api-access-w7c7r\") pod \"rabbitmq-server-2\" 
(UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530317 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b1f41d-64ad-4dec-a082-9e81438dfe0f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530396 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530429 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530638 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-config-data\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530691 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 
14:39:02.530739 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.530775 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b1f41d-64ad-4dec-a082-9e81438dfe0f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.531249 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.531909 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-config-data\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.532393 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.532758 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.533565 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/75b1f41d-64ad-4dec-a082-9e81438dfe0f-server-conf\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.535723 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.545497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/75b1f41d-64ad-4dec-a082-9e81438dfe0f-pod-info\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.550896 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.551416 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.551465 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/79fa808a8f7422c64123714813980a731a32ab42a568b8eb44235174099bb0b9/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.552588 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/75b1f41d-64ad-4dec-a082-9e81438dfe0f-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.555475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7c7r\" (UniqueName: \"kubernetes.io/projected/75b1f41d-64ad-4dec-a082-9e81438dfe0f-kube-api-access-w7c7r\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.575313 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e70e5bf0-fcc7-4ebc-8cd8-93548a2a2ca6\") pod \"rabbitmq-cell1-server-0\" (UID: \"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464\") " pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:02 crc kubenswrapper[4729]: I0127 14:39:02.621148 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:03 crc kubenswrapper[4729]: I0127 14:39:03.149241 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9e25b5cb-603a-4148-bb8a-22b9f25e8bcd\") pod \"rabbitmq-server-2\" (UID: \"75b1f41d-64ad-4dec-a082-9e81438dfe0f\") " pod="openstack/rabbitmq-server-2" Jan 27 14:39:03 crc kubenswrapper[4729]: I0127 14:39:03.246345 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 27 14:39:03 crc kubenswrapper[4729]: I0127 14:39:03.306590 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 27 14:39:03 crc kubenswrapper[4729]: I0127 14:39:03.362366 4729 scope.go:117] "RemoveContainer" containerID="dccbf84a424976db52668c4deacfb2af44ee3eb6ffeb958a89bd909f12d954ed" Jan 27 14:39:03 crc kubenswrapper[4729]: I0127 14:39:03.554549 4729 scope.go:117] "RemoveContainer" containerID="920267262e426a5814f7dcc9824fbb8d062ed5b8d587b97e2c1394ef8c992b51" Jan 27 14:39:03 crc kubenswrapper[4729]: W0127 14:39:03.952958 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75b1f41d_64ad_4dec_a082_9e81438dfe0f.slice/crio-ea2a6b73b1365b50d3f8901670c26e8bb71986316dc4b8ec43de8372a76cb957 WatchSource:0}: Error finding container ea2a6b73b1365b50d3f8901670c26e8bb71986316dc4b8ec43de8372a76cb957: Status 404 returned error can't find the container with id ea2a6b73b1365b50d3f8901670c26e8bb71986316dc4b8ec43de8372a76cb957 Jan 27 14:39:03 crc kubenswrapper[4729]: I0127 14:39:03.964476 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 27 14:39:04 crc kubenswrapper[4729]: I0127 14:39:04.074671 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" path="/var/lib/kubelet/pods/13cfdd20-ad90-472d-8962-6bec29b3fa74/volumes" Jan 27 14:39:04 crc kubenswrapper[4729]: I0127 14:39:04.136059 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d148c837-c681-4446-9e81-195c19108d09" path="/var/lib/kubelet/pods/d148c837-c681-4446-9e81-195c19108d09/volumes" Jan 27 14:39:04 crc kubenswrapper[4729]: I0127 14:39:04.162957 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"75b1f41d-64ad-4dec-a082-9e81438dfe0f","Type":"ContainerStarted","Data":"ea2a6b73b1365b50d3f8901670c26e8bb71986316dc4b8ec43de8372a76cb957"} Jan 27 14:39:04 crc kubenswrapper[4729]: I0127 14:39:04.164630 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464","Type":"ContainerStarted","Data":"b62e3f28028559c48458e50816bca1fabf669be61512326ae0b79b7e28410322"} Jan 27 14:39:05 crc kubenswrapper[4729]: I0127 14:39:05.191960 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t7d94" event={"ID":"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5","Type":"ContainerStarted","Data":"7336c7a1d11614f6671f6a199ad8a8445e0197d363906e6ab6590b2cbcc38100"} Jan 27 14:39:05 crc kubenswrapper[4729]: I0127 14:39:05.226359 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-t7d94" podStartSLOduration=2.915049313 podStartE2EDuration="29.22633748s" podCreationTimestamp="2026-01-27 14:38:36 +0000 UTC" firstStartedPulling="2026-01-27 14:38:37.646982135 +0000 UTC m=+2004.231173139" lastFinishedPulling="2026-01-27 14:39:03.958270302 +0000 UTC m=+2030.542461306" observedRunningTime="2026-01-27 14:39:05.207951407 +0000 UTC m=+2031.792142411" watchObservedRunningTime="2026-01-27 14:39:05.22633748 +0000 UTC m=+2031.810528494" Jan 27 14:39:05 crc kubenswrapper[4729]: I0127 14:39:05.818438 4729 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="13cfdd20-ad90-472d-8962-6bec29b3fa74" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: i/o timeout" Jan 27 14:39:06 crc kubenswrapper[4729]: I0127 14:39:06.207446 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464","Type":"ContainerStarted","Data":"c2337f7dc548e48eded726c58f2ccf48d6b2a558fd6731bf0fdc9862c5cc9ee8"} Jan 27 14:39:07 crc kubenswrapper[4729]: I0127 14:39:07.225459 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"75b1f41d-64ad-4dec-a082-9e81438dfe0f","Type":"ContainerStarted","Data":"295af4352ad75f7bb7249bd7bfd1ad0f798db29f086a345aa003372075cec184"} Jan 27 14:39:08 crc kubenswrapper[4729]: I0127 14:39:08.240312 4729 generic.go:334] "Generic (PLEG): container finished" podID="2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" containerID="7336c7a1d11614f6671f6a199ad8a8445e0197d363906e6ab6590b2cbcc38100" exitCode=0 Jan 27 14:39:08 crc kubenswrapper[4729]: I0127 14:39:08.240413 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t7d94" event={"ID":"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5","Type":"ContainerDied","Data":"7336c7a1d11614f6671f6a199ad8a8445e0197d363906e6ab6590b2cbcc38100"} Jan 27 14:39:08 crc kubenswrapper[4729]: I0127 14:39:08.834275 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.255794 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bf3a8e-2abb-4659-9719-fdffb80a92b1","Type":"ContainerStarted","Data":"bb6eae68d716eecfab91db46791e0757b62525038975d2816ddaa71337b2b0ff"} Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.754068 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-t7d94" Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.876254 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdxqw\" (UniqueName: \"kubernetes.io/projected/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-kube-api-access-vdxqw\") pod \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.876441 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-config-data\") pod \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.876538 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-combined-ca-bundle\") pod \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\" (UID: \"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5\") " Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.896107 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-kube-api-access-vdxqw" (OuterVolumeSpecName: "kube-api-access-vdxqw") pod "2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" (UID: "2272f0db-3c4c-44f6-97a7-685b8c9fd1c5"). InnerVolumeSpecName "kube-api-access-vdxqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.918074 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" (UID: "2272f0db-3c4c-44f6-97a7-685b8c9fd1c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.979329 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdxqw\" (UniqueName: \"kubernetes.io/projected/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-kube-api-access-vdxqw\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.979633 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:09 crc kubenswrapper[4729]: I0127 14:39:09.989499 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-config-data" (OuterVolumeSpecName: "config-data") pod "2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" (UID: "2272f0db-3c4c-44f6-97a7-685b8c9fd1c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.082080 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.110634 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-l8jxj"] Jan 27 14:39:10 crc kubenswrapper[4729]: E0127 14:39:10.111324 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" containerName="heat-db-sync" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.111340 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" containerName="heat-db-sync" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.111586 4729 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" containerName="heat-db-sync" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.112964 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.121936 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.165095 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-l8jxj"] Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.185277 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.185342 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.185395 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh64b\" (UniqueName: \"kubernetes.io/projected/304c6640-284f-4f67-8015-4517b2aaf742-kube-api-access-qh64b\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.185657 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.185734 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.185770 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-config\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.186093 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.269159 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-t7d94" event={"ID":"2272f0db-3c4c-44f6-97a7-685b8c9fd1c5","Type":"ContainerDied","Data":"8c8cdc8a143d6076c2908fc1ef2c7e7de2e757bbc7b51ac96bb52a9912ca4049"} Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.269212 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c8cdc8a143d6076c2908fc1ef2c7e7de2e757bbc7b51ac96bb52a9912ca4049" Jan 27 14:39:10 crc 
kubenswrapper[4729]: I0127 14:39:10.269233 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-t7d94" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.288451 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-config\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.288792 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.288947 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.289037 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.289172 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh64b\" (UniqueName: \"kubernetes.io/projected/304c6640-284f-4f67-8015-4517b2aaf742-kube-api-access-qh64b\") pod 
\"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.289332 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.289418 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.289763 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.289898 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.290289 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-config\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: 
\"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.290451 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.467149 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.467261 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.471242 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh64b\" (UniqueName: \"kubernetes.io/projected/304c6640-284f-4f67-8015-4517b2aaf742-kube-api-access-qh64b\") pod \"dnsmasq-dns-5b75489c6f-l8jxj\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:10 crc kubenswrapper[4729]: I0127 14:39:10.756837 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.308159 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bf3a8e-2abb-4659-9719-fdffb80a92b1","Type":"ContainerStarted","Data":"6c3537ea99fe2e88b6b5ca5263c20c5f801a37950597c14ffc6902f77d6a21c2"} Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.421389 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-l8jxj"] Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.787321 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-795d549794-t2xb4"] Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.789775 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.799993 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5ff89df78c-6425l"] Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.801417 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.811128 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5ff89df78c-6425l"] Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.882361 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-795d549794-t2xb4"] Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.921998 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7fff5b4d49-29sw4"] Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.924445 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938169 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-internal-tls-certs\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938293 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7wp\" (UniqueName: \"kubernetes.io/projected/5271075b-f655-47d8-b621-44711d9e495c-kube-api-access-ww7wp\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938344 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lngfj\" (UniqueName: \"kubernetes.io/projected/be6cee48-8743-49f2-a13b-6ce80981cfdb-kube-api-access-lngfj\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938461 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-config-data-custom\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938506 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-public-tls-certs\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938589 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-combined-ca-bundle\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938631 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-combined-ca-bundle\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938692 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-config-data-custom\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938753 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-config-data\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.938774 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-config-data\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:11 crc kubenswrapper[4729]: I0127 14:39:11.964669 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7fff5b4d49-29sw4"] Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.041542 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-config-data\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.041594 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-config-data\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.041646 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-internal-tls-certs\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.041797 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-config-data\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc 
kubenswrapper[4729]: I0127 14:39:12.041831 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-public-tls-certs\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.041924 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-internal-tls-certs\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.041973 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww7wp\" (UniqueName: \"kubernetes.io/projected/5271075b-f655-47d8-b621-44711d9e495c-kube-api-access-ww7wp\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm4gt\" (UniqueName: \"kubernetes.io/projected/91704ade-1ead-4e59-b743-f93c932a4450-kube-api-access-sm4gt\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042568 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lngfj\" (UniqueName: \"kubernetes.io/projected/be6cee48-8743-49f2-a13b-6ce80981cfdb-kube-api-access-lngfj\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" 
Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042613 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-combined-ca-bundle\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042654 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-config-data-custom\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042685 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-public-tls-certs\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042758 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-combined-ca-bundle\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042788 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-config-data-custom\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 
14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042813 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-combined-ca-bundle\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.042862 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-config-data-custom\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.049459 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-combined-ca-bundle\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.049550 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-public-tls-certs\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.049643 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-config-data\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.049752 4729 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-internal-tls-certs\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.053010 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-config-data\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.056144 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5271075b-f655-47d8-b621-44711d9e495c-config-data-custom\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.057563 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-combined-ca-bundle\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.060627 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be6cee48-8743-49f2-a13b-6ce80981cfdb-config-data-custom\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.070127 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lngfj\" (UniqueName: 
\"kubernetes.io/projected/be6cee48-8743-49f2-a13b-6ce80981cfdb-kube-api-access-lngfj\") pod \"heat-api-5ff89df78c-6425l\" (UID: \"be6cee48-8743-49f2-a13b-6ce80981cfdb\") " pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.075859 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7wp\" (UniqueName: \"kubernetes.io/projected/5271075b-f655-47d8-b621-44711d9e495c-kube-api-access-ww7wp\") pod \"heat-engine-795d549794-t2xb4\" (UID: \"5271075b-f655-47d8-b621-44711d9e495c\") " pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.139045 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.146799 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-internal-tls-certs\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.147043 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-config-data\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.147091 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-public-tls-certs\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: 
I0127 14:39:12.147228 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm4gt\" (UniqueName: \"kubernetes.io/projected/91704ade-1ead-4e59-b743-f93c932a4450-kube-api-access-sm4gt\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.147352 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-combined-ca-bundle\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.147605 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-config-data-custom\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.152751 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-config-data\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.153372 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-internal-tls-certs\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.155683 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-public-tls-certs\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.160665 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-combined-ca-bundle\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.164517 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.172627 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91704ade-1ead-4e59-b743-f93c932a4450-config-data-custom\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.173731 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm4gt\" (UniqueName: \"kubernetes.io/projected/91704ade-1ead-4e59-b743-f93c932a4450-kube-api-access-sm4gt\") pod \"heat-cfnapi-7fff5b4d49-29sw4\" (UID: \"91704ade-1ead-4e59-b743-f93c932a4450\") " pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.265492 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.322506 4729 generic.go:334] "Generic (PLEG): container finished" podID="304c6640-284f-4f67-8015-4517b2aaf742" containerID="96ae3b4c5df12278f3f4355c3e4139517116de5745108e201e9b0bbfa28aa550" exitCode=0 Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.322610 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" event={"ID":"304c6640-284f-4f67-8015-4517b2aaf742","Type":"ContainerDied","Data":"96ae3b4c5df12278f3f4355c3e4139517116de5745108e201e9b0bbfa28aa550"} Jan 27 14:39:12 crc kubenswrapper[4729]: I0127 14:39:12.323780 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" event={"ID":"304c6640-284f-4f67-8015-4517b2aaf742","Type":"ContainerStarted","Data":"4636cc6db1d9a414cfe9c6a73b7cf1589efe231a8015991d0e468a8f3906324d"} Jan 27 14:39:13 crc kubenswrapper[4729]: I0127 14:39:13.541444 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-795d549794-t2xb4"] Jan 27 14:39:13 crc kubenswrapper[4729]: W0127 14:39:13.681769 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91704ade_1ead_4e59_b743_f93c932a4450.slice/crio-e27496a41a3327f92ceb29e00e34ad42fc72c628ea4041874ba56ab6b74d3484 WatchSource:0}: Error finding container e27496a41a3327f92ceb29e00e34ad42fc72c628ea4041874ba56ab6b74d3484: Status 404 returned error can't find the container with id e27496a41a3327f92ceb29e00e34ad42fc72c628ea4041874ba56ab6b74d3484 Jan 27 14:39:13 crc kubenswrapper[4729]: I0127 14:39:13.681794 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7fff5b4d49-29sw4"] Jan 27 14:39:13 crc kubenswrapper[4729]: W0127 14:39:13.706820 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe6cee48_8743_49f2_a13b_6ce80981cfdb.slice/crio-06c68271fea25eeb00f2d4faa2d4ea11f6722eb8bc45625a5a286a4d6c561b9c WatchSource:0}: Error finding container 06c68271fea25eeb00f2d4faa2d4ea11f6722eb8bc45625a5a286a4d6c561b9c: Status 404 returned error can't find the container with id 06c68271fea25eeb00f2d4faa2d4ea11f6722eb8bc45625a5a286a4d6c561b9c Jan 27 14:39:13 crc kubenswrapper[4729]: I0127 14:39:13.707337 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5ff89df78c-6425l"] Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.382686 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ff89df78c-6425l" event={"ID":"be6cee48-8743-49f2-a13b-6ce80981cfdb","Type":"ContainerStarted","Data":"06c68271fea25eeb00f2d4faa2d4ea11f6722eb8bc45625a5a286a4d6c561b9c"} Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.386169 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-795d549794-t2xb4" event={"ID":"5271075b-f655-47d8-b621-44711d9e495c","Type":"ContainerStarted","Data":"4919778236f521855b58c09fb02b4aa90884e1f709465fd9793c6356b1b5275e"} Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.386219 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-795d549794-t2xb4" event={"ID":"5271075b-f655-47d8-b621-44711d9e495c","Type":"ContainerStarted","Data":"ddab394687d9bffbe9e1dec3ae1a55cffe168b5c365e4a08f4682716e09ca32e"} Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.386288 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.388803 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" 
event={"ID":"304c6640-284f-4f67-8015-4517b2aaf742","Type":"ContainerStarted","Data":"0c4e7b4bd33419dcdf26943f529ab1e14e4205f7b301fc4eca18f0371a92d181"} Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.389068 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.391626 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bf3a8e-2abb-4659-9719-fdffb80a92b1","Type":"ContainerStarted","Data":"1aa80672c05fa397dbd9db2df5f29bd1a4b66a50125d67f6ab8e313bb3eff411"} Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.393461 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" event={"ID":"91704ade-1ead-4e59-b743-f93c932a4450","Type":"ContainerStarted","Data":"e27496a41a3327f92ceb29e00e34ad42fc72c628ea4041874ba56ab6b74d3484"} Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.418115 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-795d549794-t2xb4" podStartSLOduration=3.418087756 podStartE2EDuration="3.418087756s" podCreationTimestamp="2026-01-27 14:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:39:14.40580496 +0000 UTC m=+2040.989995964" watchObservedRunningTime="2026-01-27 14:39:14.418087756 +0000 UTC m=+2041.002278750" Jan 27 14:39:14 crc kubenswrapper[4729]: I0127 14:39:14.450314 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" podStartSLOduration=4.450266523 podStartE2EDuration="4.450266523s" podCreationTimestamp="2026-01-27 14:39:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:39:14.430498055 +0000 UTC m=+2041.014689059" 
watchObservedRunningTime="2026-01-27 14:39:14.450266523 +0000 UTC m=+2041.034457527" Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.420900 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bf3a8e-2abb-4659-9719-fdffb80a92b1","Type":"ContainerStarted","Data":"1dbd93ebf9f7df4c9b5d1d5b39af33f90000e0609c3e7e4c03ba6e697f3324da"} Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.423117 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.431146 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" event={"ID":"91704ade-1ead-4e59-b743-f93c932a4450","Type":"ContainerStarted","Data":"3eed69dbde34ed61e58997de517e245f795877a5bbcec8590b30e68916431049"} Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.432242 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.437961 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5ff89df78c-6425l" event={"ID":"be6cee48-8743-49f2-a13b-6ce80981cfdb","Type":"ContainerStarted","Data":"4ea213bc066dac9e7865c80100f74aa7cdcdcce3aea61e60d8f783e51b1b1e9b"} Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.438110 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.464658 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=17.458159142 podStartE2EDuration="32.46463964s" podCreationTimestamp="2026-01-27 14:38:44 +0000 UTC" firstStartedPulling="2026-01-27 14:39:01.058684566 +0000 UTC m=+2027.642875570" lastFinishedPulling="2026-01-27 14:39:16.065165054 +0000 UTC m=+2042.649356068" 
observedRunningTime="2026-01-27 14:39:16.445987491 +0000 UTC m=+2043.030178495" watchObservedRunningTime="2026-01-27 14:39:16.46463964 +0000 UTC m=+2043.048830644" Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.490628 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5ff89df78c-6425l" podStartSLOduration=3.216947547 podStartE2EDuration="5.490609007s" podCreationTimestamp="2026-01-27 14:39:11 +0000 UTC" firstStartedPulling="2026-01-27 14:39:13.709393054 +0000 UTC m=+2040.293584058" lastFinishedPulling="2026-01-27 14:39:15.983054514 +0000 UTC m=+2042.567245518" observedRunningTime="2026-01-27 14:39:16.47280673 +0000 UTC m=+2043.056997744" watchObservedRunningTime="2026-01-27 14:39:16.490609007 +0000 UTC m=+2043.074800011" Jan 27 14:39:16 crc kubenswrapper[4729]: I0127 14:39:16.514616 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" podStartSLOduration=3.216757673 podStartE2EDuration="5.514592894s" podCreationTimestamp="2026-01-27 14:39:11 +0000 UTC" firstStartedPulling="2026-01-27 14:39:13.688583399 +0000 UTC m=+2040.272774403" lastFinishedPulling="2026-01-27 14:39:15.98641862 +0000 UTC m=+2042.570609624" observedRunningTime="2026-01-27 14:39:16.498351616 +0000 UTC m=+2043.082542630" watchObservedRunningTime="2026-01-27 14:39:16.514592894 +0000 UTC m=+2043.098783908" Jan 27 14:39:20 crc kubenswrapper[4729]: I0127 14:39:20.758106 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:20 crc kubenswrapper[4729]: I0127 14:39:20.838673 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-7dvpt"] Jan 27 14:39:20 crc kubenswrapper[4729]: I0127 14:39:20.839547 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" podUID="060e677e-e80b-48ad-9f73-1242976939c5" 
containerName="dnsmasq-dns" containerID="cri-o://f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868" gracePeriod=10 Jan 27 14:39:20 crc kubenswrapper[4729]: E0127 14:39:20.982016 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e677e_e80b_48ad_9f73_1242976939c5.slice/crio-f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod060e677e_e80b_48ad_9f73_1242976939c5.slice/crio-conmon-f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.036284 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-p9jfn"] Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.038516 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.100952 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-p9jfn"] Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142601 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9fvr\" (UniqueName: \"kubernetes.io/projected/31add6d0-b976-4106-93f2-d9f13b3de020-kube-api-access-j9fvr\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142716 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142755 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142778 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142902 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-config\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142960 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.142989 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245023 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245186 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-config\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245257 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245292 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245394 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9fvr\" (UniqueName: \"kubernetes.io/projected/31add6d0-b976-4106-93f2-d9f13b3de020-kube-api-access-j9fvr\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245455 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.245494 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.246421 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.247146 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.247254 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-config\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.247408 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.248205 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.248299 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/31add6d0-b976-4106-93f2-d9f13b3de020-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.272730 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9fvr\" (UniqueName: \"kubernetes.io/projected/31add6d0-b976-4106-93f2-d9f13b3de020-kube-api-access-j9fvr\") pod \"dnsmasq-dns-5d75f767dc-p9jfn\" (UID: \"31add6d0-b976-4106-93f2-d9f13b3de020\") " pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.372410 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.522986 4729 generic.go:334] "Generic (PLEG): container finished" podID="060e677e-e80b-48ad-9f73-1242976939c5" containerID="f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868" exitCode=0 Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.523276 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" event={"ID":"060e677e-e80b-48ad-9f73-1242976939c5","Type":"ContainerDied","Data":"f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868"} Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.685551 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.798206 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cx2m\" (UniqueName: \"kubernetes.io/projected/060e677e-e80b-48ad-9f73-1242976939c5-kube-api-access-2cx2m\") pod \"060e677e-e80b-48ad-9f73-1242976939c5\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.798301 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-svc\") pod \"060e677e-e80b-48ad-9f73-1242976939c5\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.799485 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-swift-storage-0\") pod \"060e677e-e80b-48ad-9f73-1242976939c5\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.799617 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-sb\") pod \"060e677e-e80b-48ad-9f73-1242976939c5\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.799724 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-nb\") pod \"060e677e-e80b-48ad-9f73-1242976939c5\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.799823 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-config\") pod \"060e677e-e80b-48ad-9f73-1242976939c5\" (UID: \"060e677e-e80b-48ad-9f73-1242976939c5\") " Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.804703 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/060e677e-e80b-48ad-9f73-1242976939c5-kube-api-access-2cx2m" (OuterVolumeSpecName: "kube-api-access-2cx2m") pod "060e677e-e80b-48ad-9f73-1242976939c5" (UID: "060e677e-e80b-48ad-9f73-1242976939c5"). InnerVolumeSpecName "kube-api-access-2cx2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.906160 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cx2m\" (UniqueName: \"kubernetes.io/projected/060e677e-e80b-48ad-9f73-1242976939c5-kube-api-access-2cx2m\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.910320 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "060e677e-e80b-48ad-9f73-1242976939c5" (UID: "060e677e-e80b-48ad-9f73-1242976939c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:21 crc kubenswrapper[4729]: I0127 14:39:21.935266 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "060e677e-e80b-48ad-9f73-1242976939c5" (UID: "060e677e-e80b-48ad-9f73-1242976939c5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.025329 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.025369 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.140206 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "060e677e-e80b-48ad-9f73-1242976939c5" (UID: "060e677e-e80b-48ad-9f73-1242976939c5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.160917 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "060e677e-e80b-48ad-9f73-1242976939c5" (UID: "060e677e-e80b-48ad-9f73-1242976939c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.224644 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-config" (OuterVolumeSpecName: "config") pod "060e677e-e80b-48ad-9f73-1242976939c5" (UID: "060e677e-e80b-48ad-9f73-1242976939c5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.230770 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.230816 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.230834 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/060e677e-e80b-48ad-9f73-1242976939c5-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.350568 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-p9jfn"] Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.537242 4729 generic.go:334] "Generic (PLEG): container finished" podID="31add6d0-b976-4106-93f2-d9f13b3de020" containerID="ff42bf6fd75dcf18dd9396bf6aebb28d5274f93a38a90cda3eee610b96c97af4" exitCode=0 Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.537311 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" event={"ID":"31add6d0-b976-4106-93f2-d9f13b3de020","Type":"ContainerDied","Data":"ff42bf6fd75dcf18dd9396bf6aebb28d5274f93a38a90cda3eee610b96c97af4"} Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.537343 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" event={"ID":"31add6d0-b976-4106-93f2-d9f13b3de020","Type":"ContainerStarted","Data":"5cbdbdaa3223c3a75c17cf625edd6f45d6381846cfd805db86fb78a514f23d1f"} Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.540803 4729 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" event={"ID":"060e677e-e80b-48ad-9f73-1242976939c5","Type":"ContainerDied","Data":"6a8fef118ba792178ad218f54caf453d54f583a91c6b245eb2a0991e7cf77bb9"} Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.540867 4729 scope.go:117] "RemoveContainer" containerID="f168f54d3a29dfb87c7ff8d99a9c6c9053b081627fcd3d49e94d6af5929b3868" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.540980 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-7dvpt" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.583683 4729 scope.go:117] "RemoveContainer" containerID="6d9619b42295ad0e3ab7ceecd3304a9079a371dbba1fe4f1f4a845a234518432" Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.611018 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-7dvpt"] Jan 27 14:39:22 crc kubenswrapper[4729]: I0127 14:39:22.641059 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-7dvpt"] Jan 27 14:39:23 crc kubenswrapper[4729]: I0127 14:39:23.581852 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" event={"ID":"31add6d0-b976-4106-93f2-d9f13b3de020","Type":"ContainerStarted","Data":"74c33329f28b28a60bf61ecc55dd7ad08793a661230a11450be86720c340ea3b"} Jan 27 14:39:23 crc kubenswrapper[4729]: I0127 14:39:23.582395 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:23 crc kubenswrapper[4729]: I0127 14:39:23.628071 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" podStartSLOduration=3.628048651 podStartE2EDuration="3.628048651s" podCreationTimestamp="2026-01-27 14:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-27 14:39:23.611225478 +0000 UTC m=+2050.195416502" watchObservedRunningTime="2026-01-27 14:39:23.628048651 +0000 UTC m=+2050.212239665" Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.063429 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="060e677e-e80b-48ad-9f73-1242976939c5" path="/var/lib/kubelet/pods/060e677e-e80b-48ad-9f73-1242976939c5/volumes" Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.653272 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7fff5b4d49-29sw4" Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.659488 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5ff89df78c-6425l" Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.728557 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5fc78b9bfd-chbvw"] Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.729127 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" podUID="5f89791b-c8ab-4897-8989-b3aab9352e5e" containerName="heat-cfnapi" containerID="cri-o://7663c3e0b22990aab84ea7e2654791b5fc9f5e1cf8019071e5dbec8bb277fbf0" gracePeriod=60 Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.772119 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-55554cf5d6-rht7q"] Jan 27 14:39:24 crc kubenswrapper[4729]: I0127 14:39:24.772379 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-55554cf5d6-rht7q" podUID="21a1bec4-36d5-438f-8575-665c6edad962" containerName="heat-api" containerID="cri-o://793730f20c62a769cf31857f01a74079c290c34d6aec6ec0ed627da23c338e04" gracePeriod=60 Jan 27 14:39:27 crc kubenswrapper[4729]: I0127 14:39:27.966764 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-55554cf5d6-rht7q" 
podUID="21a1bec4-36d5-438f-8575-665c6edad962" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.220:8004/healthcheck\": read tcp 10.217.0.2:50760->10.217.0.220:8004: read: connection reset by peer" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.229157 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" podUID="5f89791b-c8ab-4897-8989-b3aab9352e5e" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.221:8000/healthcheck\": read tcp 10.217.0.2:32928->10.217.0.221:8000: read: connection reset by peer" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.650273 4729 generic.go:334] "Generic (PLEG): container finished" podID="21a1bec4-36d5-438f-8575-665c6edad962" containerID="793730f20c62a769cf31857f01a74079c290c34d6aec6ec0ed627da23c338e04" exitCode=0 Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.650367 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55554cf5d6-rht7q" event={"ID":"21a1bec4-36d5-438f-8575-665c6edad962","Type":"ContainerDied","Data":"793730f20c62a769cf31857f01a74079c290c34d6aec6ec0ed627da23c338e04"} Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.650400 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-55554cf5d6-rht7q" event={"ID":"21a1bec4-36d5-438f-8575-665c6edad962","Type":"ContainerDied","Data":"8bbafd3084ae569ed949d047dfd6fa7af73a9616e38cba76722b769415dd38e1"} Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.650414 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bbafd3084ae569ed949d047dfd6fa7af73a9616e38cba76722b769415dd38e1" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.652253 4729 generic.go:334] "Generic (PLEG): container finished" podID="5f89791b-c8ab-4897-8989-b3aab9352e5e" containerID="7663c3e0b22990aab84ea7e2654791b5fc9f5e1cf8019071e5dbec8bb277fbf0" exitCode=0 Jan 27 14:39:28 crc 
kubenswrapper[4729]: I0127 14:39:28.652296 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" event={"ID":"5f89791b-c8ab-4897-8989-b3aab9352e5e","Type":"ContainerDied","Data":"7663c3e0b22990aab84ea7e2654791b5fc9f5e1cf8019071e5dbec8bb277fbf0"} Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.670287 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.824295 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-public-tls-certs\") pod \"21a1bec4-36d5-438f-8575-665c6edad962\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.824654 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data-custom\") pod \"21a1bec4-36d5-438f-8575-665c6edad962\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.824835 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-internal-tls-certs\") pod \"21a1bec4-36d5-438f-8575-665c6edad962\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.824889 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data\") pod \"21a1bec4-36d5-438f-8575-665c6edad962\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.824989 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pmvhq\" (UniqueName: \"kubernetes.io/projected/21a1bec4-36d5-438f-8575-665c6edad962-kube-api-access-pmvhq\") pod \"21a1bec4-36d5-438f-8575-665c6edad962\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.825110 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-combined-ca-bundle\") pod \"21a1bec4-36d5-438f-8575-665c6edad962\" (UID: \"21a1bec4-36d5-438f-8575-665c6edad962\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.831296 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a1bec4-36d5-438f-8575-665c6edad962-kube-api-access-pmvhq" (OuterVolumeSpecName: "kube-api-access-pmvhq") pod "21a1bec4-36d5-438f-8575-665c6edad962" (UID: "21a1bec4-36d5-438f-8575-665c6edad962"). InnerVolumeSpecName "kube-api-access-pmvhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.832384 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "21a1bec4-36d5-438f-8575-665c6edad962" (UID: "21a1bec4-36d5-438f-8575-665c6edad962"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.853911 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.862663 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21a1bec4-36d5-438f-8575-665c6edad962" (UID: "21a1bec4-36d5-438f-8575-665c6edad962"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.910034 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data" (OuterVolumeSpecName: "config-data") pod "21a1bec4-36d5-438f-8575-665c6edad962" (UID: "21a1bec4-36d5-438f-8575-665c6edad962"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.935097 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-combined-ca-bundle\") pod \"5f89791b-c8ab-4897-8989-b3aab9352e5e\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.935225 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-public-tls-certs\") pod \"5f89791b-c8ab-4897-8989-b3aab9352e5e\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.935305 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data-custom\") pod \"5f89791b-c8ab-4897-8989-b3aab9352e5e\" (UID: 
\"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.935420 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-internal-tls-certs\") pod \"5f89791b-c8ab-4897-8989-b3aab9352e5e\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.935457 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjwsw\" (UniqueName: \"kubernetes.io/projected/5f89791b-c8ab-4897-8989-b3aab9352e5e-kube-api-access-pjwsw\") pod \"5f89791b-c8ab-4897-8989-b3aab9352e5e\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.935579 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data\") pod \"5f89791b-c8ab-4897-8989-b3aab9352e5e\" (UID: \"5f89791b-c8ab-4897-8989-b3aab9352e5e\") " Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.936572 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.936595 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.936607 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.936619 4729 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmvhq\" (UniqueName: \"kubernetes.io/projected/21a1bec4-36d5-438f-8575-665c6edad962-kube-api-access-pmvhq\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.946584 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5f89791b-c8ab-4897-8989-b3aab9352e5e" (UID: "5f89791b-c8ab-4897-8989-b3aab9352e5e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.953631 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "21a1bec4-36d5-438f-8575-665c6edad962" (UID: "21a1bec4-36d5-438f-8575-665c6edad962"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:28 crc kubenswrapper[4729]: I0127 14:39:28.967073 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f89791b-c8ab-4897-8989-b3aab9352e5e-kube-api-access-pjwsw" (OuterVolumeSpecName: "kube-api-access-pjwsw") pod "5f89791b-c8ab-4897-8989-b3aab9352e5e" (UID: "5f89791b-c8ab-4897-8989-b3aab9352e5e"). InnerVolumeSpecName "kube-api-access-pjwsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.024094 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "21a1bec4-36d5-438f-8575-665c6edad962" (UID: "21a1bec4-36d5-438f-8575-665c6edad962"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.039414 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.039505 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjwsw\" (UniqueName: \"kubernetes.io/projected/5f89791b-c8ab-4897-8989-b3aab9352e5e-kube-api-access-pjwsw\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.039520 4729 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.039532 4729 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21a1bec4-36d5-438f-8575-665c6edad962-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.049036 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5f89791b-c8ab-4897-8989-b3aab9352e5e" (UID: "5f89791b-c8ab-4897-8989-b3aab9352e5e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.118573 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f89791b-c8ab-4897-8989-b3aab9352e5e" (UID: "5f89791b-c8ab-4897-8989-b3aab9352e5e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.131057 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5f89791b-c8ab-4897-8989-b3aab9352e5e" (UID: "5f89791b-c8ab-4897-8989-b3aab9352e5e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.133976 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data" (OuterVolumeSpecName: "config-data") pod "5f89791b-c8ab-4897-8989-b3aab9352e5e" (UID: "5f89791b-c8ab-4897-8989-b3aab9352e5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.142083 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.142117 4729 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.142127 4729 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.142136 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f89791b-c8ab-4897-8989-b3aab9352e5e-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.665194 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-55554cf5d6-rht7q" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.665388 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" event={"ID":"5f89791b-c8ab-4897-8989-b3aab9352e5e","Type":"ContainerDied","Data":"f5f8b78b07a028f192a486c6f27117bbec1c9da53de3ca4a4c818eea75d33442"} Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.665457 4729 scope.go:117] "RemoveContainer" containerID="7663c3e0b22990aab84ea7e2654791b5fc9f5e1cf8019071e5dbec8bb277fbf0" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.665413 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5fc78b9bfd-chbvw" Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.709259 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-55554cf5d6-rht7q"] Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.720298 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-55554cf5d6-rht7q"] Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.731419 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5fc78b9bfd-chbvw"] Jan 27 14:39:29 crc kubenswrapper[4729]: I0127 14:39:29.742619 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5fc78b9bfd-chbvw"] Jan 27 14:39:30 crc kubenswrapper[4729]: I0127 14:39:30.064461 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a1bec4-36d5-438f-8575-665c6edad962" path="/var/lib/kubelet/pods/21a1bec4-36d5-438f-8575-665c6edad962/volumes" Jan 27 14:39:30 crc kubenswrapper[4729]: I0127 14:39:30.065378 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f89791b-c8ab-4897-8989-b3aab9352e5e" 
path="/var/lib/kubelet/pods/5f89791b-c8ab-4897-8989-b3aab9352e5e/volumes" Jan 27 14:39:31 crc kubenswrapper[4729]: I0127 14:39:31.374111 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-p9jfn" Jan 27 14:39:31 crc kubenswrapper[4729]: I0127 14:39:31.449639 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-l8jxj"] Jan 27 14:39:31 crc kubenswrapper[4729]: I0127 14:39:31.449927 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" podUID="304c6640-284f-4f67-8015-4517b2aaf742" containerName="dnsmasq-dns" containerID="cri-o://0c4e7b4bd33419dcdf26943f529ab1e14e4205f7b301fc4eca18f0371a92d181" gracePeriod=10 Jan 27 14:39:31 crc kubenswrapper[4729]: I0127 14:39:31.696937 4729 generic.go:334] "Generic (PLEG): container finished" podID="304c6640-284f-4f67-8015-4517b2aaf742" containerID="0c4e7b4bd33419dcdf26943f529ab1e14e4205f7b301fc4eca18f0371a92d181" exitCode=0 Jan 27 14:39:31 crc kubenswrapper[4729]: I0127 14:39:31.697003 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" event={"ID":"304c6640-284f-4f67-8015-4517b2aaf742","Type":"ContainerDied","Data":"0c4e7b4bd33419dcdf26943f529ab1e14e4205f7b301fc4eca18f0371a92d181"} Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.189294 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-795d549794-t2xb4" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.198227 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.288205 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d9cbb5dfc-kxwhh"] Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.288459 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" containerID="cri-o://23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" gracePeriod=60 Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.324743 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-svc\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.324892 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh64b\" (UniqueName: \"kubernetes.io/projected/304c6640-284f-4f67-8015-4517b2aaf742-kube-api-access-qh64b\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.324983 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-nb\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.325028 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-config\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 
14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.325133 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-sb\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.325199 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-openstack-edpm-ipam\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.325237 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-swift-storage-0\") pod \"304c6640-284f-4f67-8015-4517b2aaf742\" (UID: \"304c6640-284f-4f67-8015-4517b2aaf742\") " Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.367210 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/304c6640-284f-4f67-8015-4517b2aaf742-kube-api-access-qh64b" (OuterVolumeSpecName: "kube-api-access-qh64b") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "kube-api-access-qh64b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.420345 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.435076 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.435105 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh64b\" (UniqueName: \"kubernetes.io/projected/304c6640-284f-4f67-8015-4517b2aaf742-kube-api-access-qh64b\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.439421 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.442495 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.485094 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.486256 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-config" (OuterVolumeSpecName: "config") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.486250 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "304c6640-284f-4f67-8015-4517b2aaf742" (UID: "304c6640-284f-4f67-8015-4517b2aaf742"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.537675 4729 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.537905 4729 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-config\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.537994 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.538075 4729 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 27 
14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.538132 4729 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/304c6640-284f-4f67-8015-4517b2aaf742-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.710437 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" event={"ID":"304c6640-284f-4f67-8015-4517b2aaf742","Type":"ContainerDied","Data":"4636cc6db1d9a414cfe9c6a73b7cf1589efe231a8015991d0e468a8f3906324d"} Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.710681 4729 scope.go:117] "RemoveContainer" containerID="0c4e7b4bd33419dcdf26943f529ab1e14e4205f7b301fc4eca18f0371a92d181" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.710912 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-l8jxj" Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.783630 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-l8jxj"] Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.800287 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-l8jxj"] Jan 27 14:39:32 crc kubenswrapper[4729]: I0127 14:39:32.831071 4729 scope.go:117] "RemoveContainer" containerID="96ae3b4c5df12278f3f4355c3e4139517116de5745108e201e9b0bbfa28aa550" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.105268 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q5jzg"] Jan 27 14:39:33 crc kubenswrapper[4729]: E0127 14:39:33.106261 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a1bec4-36d5-438f-8575-665c6edad962" containerName="heat-api" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106278 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a1bec4-36d5-438f-8575-665c6edad962" containerName="heat-api" Jan 27 
14:39:33 crc kubenswrapper[4729]: E0127 14:39:33.106302 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060e677e-e80b-48ad-9f73-1242976939c5" containerName="dnsmasq-dns" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106308 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="060e677e-e80b-48ad-9f73-1242976939c5" containerName="dnsmasq-dns" Jan 27 14:39:33 crc kubenswrapper[4729]: E0127 14:39:33.106318 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304c6640-284f-4f67-8015-4517b2aaf742" containerName="dnsmasq-dns" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106326 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="304c6640-284f-4f67-8015-4517b2aaf742" containerName="dnsmasq-dns" Jan 27 14:39:33 crc kubenswrapper[4729]: E0127 14:39:33.106365 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="060e677e-e80b-48ad-9f73-1242976939c5" containerName="init" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106371 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="060e677e-e80b-48ad-9f73-1242976939c5" containerName="init" Jan 27 14:39:33 crc kubenswrapper[4729]: E0127 14:39:33.106381 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="304c6640-284f-4f67-8015-4517b2aaf742" containerName="init" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106387 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="304c6640-284f-4f67-8015-4517b2aaf742" containerName="init" Jan 27 14:39:33 crc kubenswrapper[4729]: E0127 14:39:33.106404 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f89791b-c8ab-4897-8989-b3aab9352e5e" containerName="heat-cfnapi" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106409 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f89791b-c8ab-4897-8989-b3aab9352e5e" containerName="heat-cfnapi" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106617 4729 
memory_manager.go:354] "RemoveStaleState removing state" podUID="21a1bec4-36d5-438f-8575-665c6edad962" containerName="heat-api" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106627 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="304c6640-284f-4f67-8015-4517b2aaf742" containerName="dnsmasq-dns" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106652 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="060e677e-e80b-48ad-9f73-1242976939c5" containerName="dnsmasq-dns" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.106671 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f89791b-c8ab-4897-8989-b3aab9352e5e" containerName="heat-cfnapi" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.108757 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.154774 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jzg"] Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.164342 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-catalog-content\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.164615 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-utilities\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.164962 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7rb\" (UniqueName: \"kubernetes.io/projected/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-kube-api-access-vb7rb\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.267594 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-utilities\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.268092 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7rb\" (UniqueName: \"kubernetes.io/projected/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-kube-api-access-vb7rb\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.268253 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-utilities\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.268509 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-catalog-content\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.268887 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-catalog-content\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.287733 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7rb\" (UniqueName: \"kubernetes.io/projected/9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb-kube-api-access-vb7rb\") pod \"certified-operators-q5jzg\" (UID: \"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb\") " pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.430507 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:33 crc kubenswrapper[4729]: I0127 14:39:33.959947 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jzg"] Jan 27 14:39:34 crc kubenswrapper[4729]: I0127 14:39:34.067985 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="304c6640-284f-4f67-8015-4517b2aaf742" path="/var/lib/kubelet/pods/304c6640-284f-4f67-8015-4517b2aaf742/volumes" Jan 27 14:39:34 crc kubenswrapper[4729]: I0127 14:39:34.739055 4729 generic.go:334] "Generic (PLEG): container finished" podID="9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb" containerID="bafa1b8e3ac1bb9b69e81b6216c7984e6f776c319db87ed31a030e89d6165544" exitCode=0 Jan 27 14:39:34 crc kubenswrapper[4729]: I0127 14:39:34.739162 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jzg" event={"ID":"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb","Type":"ContainerDied","Data":"bafa1b8e3ac1bb9b69e81b6216c7984e6f776c319db87ed31a030e89d6165544"} Jan 27 14:39:34 crc kubenswrapper[4729]: I0127 14:39:34.739424 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jzg" event={"ID":"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb","Type":"ContainerStarted","Data":"c1dbcd15a6bc63f052c616ee7ca7203c8648fde78da135f0077c79a2ff5d466a"} Jan 27 14:39:36 crc kubenswrapper[4729]: E0127 14:39:36.415682 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:36 crc kubenswrapper[4729]: E0127 14:39:36.419266 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:36 crc kubenswrapper[4729]: E0127 14:39:36.421486 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:36 crc kubenswrapper[4729]: E0127 14:39:36.421531 4729 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" Jan 27 14:39:36 crc kubenswrapper[4729]: I0127 14:39:36.969503 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-rkrfd"] Jan 27 14:39:36 crc kubenswrapper[4729]: I0127 
14:39:36.986868 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-rkrfd"] Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.051283 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-m7xd4"] Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.053870 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.056860 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.124601 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-m7xd4"] Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.148135 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-scripts\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.149826 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbqsr\" (UniqueName: \"kubernetes.io/projected/ac0aeb60-0855-486a-b23d-790410009406-kube-api-access-rbqsr\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.149981 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-combined-ca-bundle\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.150037 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-config-data\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.252648 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-scripts\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.252916 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbqsr\" (UniqueName: \"kubernetes.io/projected/ac0aeb60-0855-486a-b23d-790410009406-kube-api-access-rbqsr\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.253016 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-combined-ca-bundle\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.253059 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-config-data\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.263264 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-config-data\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.265024 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-combined-ca-bundle\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.267049 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-scripts\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.283678 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbqsr\" (UniqueName: \"kubernetes.io/projected/ac0aeb60-0855-486a-b23d-790410009406-kube-api-access-rbqsr\") pod \"aodh-db-sync-m7xd4\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:37 crc kubenswrapper[4729]: I0127 14:39:37.385744 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:38 crc kubenswrapper[4729]: W0127 14:39:38.023299 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac0aeb60_0855_486a_b23d_790410009406.slice/crio-5fbcd160b21c3ca4533e94d1559598ce8fd18d48c82f314287eb32521c2c9ca4 WatchSource:0}: Error finding container 5fbcd160b21c3ca4533e94d1559598ce8fd18d48c82f314287eb32521c2c9ca4: Status 404 returned error can't find the container with id 5fbcd160b21c3ca4533e94d1559598ce8fd18d48c82f314287eb32521c2c9ca4 Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.037913 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-m7xd4"] Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.083328 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59bdaa5a-26c0-4a26-af8d-b6a4306f904c" path="/var/lib/kubelet/pods/59bdaa5a-26c0-4a26-af8d-b6a4306f904c/volumes" Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.811544 4729 generic.go:334] "Generic (PLEG): container finished" podID="75b1f41d-64ad-4dec-a082-9e81438dfe0f" containerID="295af4352ad75f7bb7249bd7bfd1ad0f798db29f086a345aa003372075cec184" exitCode=0 Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.811628 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"75b1f41d-64ad-4dec-a082-9e81438dfe0f","Type":"ContainerDied","Data":"295af4352ad75f7bb7249bd7bfd1ad0f798db29f086a345aa003372075cec184"} Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.815771 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m7xd4" event={"ID":"ac0aeb60-0855-486a-b23d-790410009406","Type":"ContainerStarted","Data":"5fbcd160b21c3ca4533e94d1559598ce8fd18d48c82f314287eb32521c2c9ca4"} Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.825492 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464" containerID="c2337f7dc548e48eded726c58f2ccf48d6b2a558fd6731bf0fdc9862c5cc9ee8" exitCode=0 Jan 27 14:39:38 crc kubenswrapper[4729]: I0127 14:39:38.825534 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464","Type":"ContainerDied","Data":"c2337f7dc548e48eded726c58f2ccf48d6b2a558fd6731bf0fdc9862c5cc9ee8"} Jan 27 14:39:45 crc kubenswrapper[4729]: I0127 14:39:45.232285 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.104514 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw"] Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.106772 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.109458 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.109895 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.110495 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.110615 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.148780 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw"] Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.211086 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8vl\" (UniqueName: \"kubernetes.io/projected/90776e5b-71cf-43d1-969b-16278afed3cf-kube-api-access-lr8vl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.211180 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.211488 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.211617 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.314229 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.314461 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.314570 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.314685 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr8vl\" (UniqueName: \"kubernetes.io/projected/90776e5b-71cf-43d1-969b-16278afed3cf-kube-api-access-lr8vl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.322163 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.323055 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.338318 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.356542 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr8vl\" (UniqueName: \"kubernetes.io/projected/90776e5b-71cf-43d1-969b-16278afed3cf-kube-api-access-lr8vl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: E0127 14:39:46.414336 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:46 crc kubenswrapper[4729]: E0127 14:39:46.415606 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:46 crc kubenswrapper[4729]: E0127 14:39:46.416591 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:46 crc kubenswrapper[4729]: E0127 14:39:46.416629 4729 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.451668 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.958757 4729 generic.go:334] "Generic (PLEG): container finished" podID="9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb" containerID="13f3457f88a24e96ada8dd58a215ea1ba0fa29e837c213d565499c222236ce74" exitCode=0 Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.958946 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jzg" event={"ID":"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb","Type":"ContainerDied","Data":"13f3457f88a24e96ada8dd58a215ea1ba0fa29e837c213d565499c222236ce74"} Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.970230 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"75b1f41d-64ad-4dec-a082-9e81438dfe0f","Type":"ContainerStarted","Data":"e1a65eaa48d21806ec921c696d6f33428b076d153c87bd15ef349a130d36db5a"} Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.971326 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 27 14:39:46 crc kubenswrapper[4729]: I0127 14:39:46.993929 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m7xd4" event={"ID":"ac0aeb60-0855-486a-b23d-790410009406","Type":"ContainerStarted","Data":"5df90376480176487af8b7547bceb7121703a187a5682c983c6d34fc9c499804"} Jan 27 14:39:47 crc kubenswrapper[4729]: I0127 14:39:47.000724 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464","Type":"ContainerStarted","Data":"ca1864e1c9e299a1d003756672317dfda244e8692561ad5e51e464868790d45a"} Jan 27 14:39:47 crc kubenswrapper[4729]: I0127 14:39:47.000955 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:39:47 crc kubenswrapper[4729]: I0127 14:39:47.022221 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=45.022182129 podStartE2EDuration="45.022182129s" podCreationTimestamp="2026-01-27 14:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:39:47.013966959 +0000 UTC m=+2073.598157963" watchObservedRunningTime="2026-01-27 14:39:47.022182129 +0000 UTC m=+2073.606373133" Jan 27 14:39:47 crc kubenswrapper[4729]: I0127 14:39:47.048255 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-m7xd4" podStartSLOduration=2.291564014 podStartE2EDuration="10.048239099s" podCreationTimestamp="2026-01-27 14:39:37 +0000 UTC" firstStartedPulling="2026-01-27 14:39:38.02636987 +0000 UTC m=+2064.610560874" lastFinishedPulling="2026-01-27 14:39:45.783044955 +0000 UTC m=+2072.367235959" observedRunningTime="2026-01-27 14:39:47.028603045 +0000 UTC m=+2073.612794069" watchObservedRunningTime="2026-01-27 14:39:47.048239099 +0000 UTC m=+2073.632430103" Jan 27 14:39:47 crc kubenswrapper[4729]: I0127 14:39:47.072411 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.07239326 podStartE2EDuration="45.07239326s" podCreationTimestamp="2026-01-27 14:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:39:47.056483731 +0000 UTC m=+2073.640674735" watchObservedRunningTime="2026-01-27 14:39:47.07239326 +0000 UTC m=+2073.656584264" Jan 27 14:39:49 crc kubenswrapper[4729]: I0127 14:39:49.785805 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw"] Jan 27 14:39:50 crc kubenswrapper[4729]: I0127 14:39:50.040524 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" event={"ID":"90776e5b-71cf-43d1-969b-16278afed3cf","Type":"ContainerStarted","Data":"fd73f6142baf6b25e84851102fe66bdddbef1b3340c2e4b2000eef057c79753f"} Jan 27 14:39:52 crc kubenswrapper[4729]: I0127 14:39:52.074301 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q5jzg" event={"ID":"9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb","Type":"ContainerStarted","Data":"ee07c19d34f1bf6a797cf8717306b3a8514a06db36e2f9ffe64e43620c030690"} Jan 27 14:39:52 crc kubenswrapper[4729]: I0127 14:39:52.111695 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q5jzg" podStartSLOduration=2.647058654 podStartE2EDuration="19.111672353s" podCreationTimestamp="2026-01-27 14:39:33 +0000 UTC" firstStartedPulling="2026-01-27 14:39:34.741649407 +0000 UTC m=+2061.325840411" lastFinishedPulling="2026-01-27 14:39:51.206263106 +0000 UTC m=+2077.790454110" observedRunningTime="2026-01-27 14:39:52.082849603 +0000 UTC m=+2078.667040607" watchObservedRunningTime="2026-01-27 14:39:52.111672353 +0000 UTC m=+2078.695863357" Jan 27 14:39:53 crc kubenswrapper[4729]: I0127 14:39:53.432008 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:53 crc kubenswrapper[4729]: I0127 14:39:53.432325 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:39:54 crc kubenswrapper[4729]: I0127 14:39:54.511270 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q5jzg" podUID="9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb" containerName="registry-server" probeResult="failure" output=< Jan 27 14:39:54 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:39:54 crc kubenswrapper[4729]: > Jan 27 14:39:55 crc 
kubenswrapper[4729]: I0127 14:39:55.125506 4729 generic.go:334] "Generic (PLEG): container finished" podID="ac0aeb60-0855-486a-b23d-790410009406" containerID="5df90376480176487af8b7547bceb7121703a187a5682c983c6d34fc9c499804" exitCode=0 Jan 27 14:39:55 crc kubenswrapper[4729]: I0127 14:39:55.125586 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m7xd4" event={"ID":"ac0aeb60-0855-486a-b23d-790410009406","Type":"ContainerDied","Data":"5df90376480176487af8b7547bceb7121703a187a5682c983c6d34fc9c499804"} Jan 27 14:39:55 crc kubenswrapper[4729]: I0127 14:39:55.130162 4729 generic.go:334] "Generic (PLEG): container finished" podID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" exitCode=0 Jan 27 14:39:55 crc kubenswrapper[4729]: I0127 14:39:55.130187 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" event={"ID":"24dbee0b-07e8-420f-af0a-83c8d5ccf70f","Type":"ContainerDied","Data":"23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735"} Jan 27 14:39:56 crc kubenswrapper[4729]: E0127 14:39:56.419805 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735 is running failed: container process not found" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:56 crc kubenswrapper[4729]: E0127 14:39:56.421535 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735 is running failed: container process not found" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" 
cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:56 crc kubenswrapper[4729]: E0127 14:39:56.423713 4729 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735 is running failed: container process not found" containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 27 14:39:56 crc kubenswrapper[4729]: E0127 14:39:56.423755 4729 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.510672 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.538447 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-combined-ca-bundle\") pod \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.538568 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data-custom\") pod \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.538602 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data\") pod \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.538867 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79vkh\" (UniqueName: \"kubernetes.io/projected/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-kube-api-access-79vkh\") pod \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\" (UID: \"24dbee0b-07e8-420f-af0a-83c8d5ccf70f\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.546716 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-kube-api-access-79vkh" (OuterVolumeSpecName: "kube-api-access-79vkh") pod "24dbee0b-07e8-420f-af0a-83c8d5ccf70f" (UID: "24dbee0b-07e8-420f-af0a-83c8d5ccf70f"). InnerVolumeSpecName "kube-api-access-79vkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.571401 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "24dbee0b-07e8-420f-af0a-83c8d5ccf70f" (UID: "24dbee0b-07e8-420f-af0a-83c8d5ccf70f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.645665 4729 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.645696 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79vkh\" (UniqueName: \"kubernetes.io/projected/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-kube-api-access-79vkh\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.662276 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24dbee0b-07e8-420f-af0a-83c8d5ccf70f" (UID: "24dbee0b-07e8-420f-af0a-83c8d5ccf70f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.693625 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.702901 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data" (OuterVolumeSpecName: "config-data") pod "24dbee0b-07e8-420f-af0a-83c8d5ccf70f" (UID: "24dbee0b-07e8-420f-af0a-83c8d5ccf70f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.746862 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-scripts\") pod \"ac0aeb60-0855-486a-b23d-790410009406\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.746965 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-combined-ca-bundle\") pod \"ac0aeb60-0855-486a-b23d-790410009406\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.747163 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-config-data\") pod \"ac0aeb60-0855-486a-b23d-790410009406\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.747264 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbqsr\" (UniqueName: \"kubernetes.io/projected/ac0aeb60-0855-486a-b23d-790410009406-kube-api-access-rbqsr\") pod \"ac0aeb60-0855-486a-b23d-790410009406\" (UID: \"ac0aeb60-0855-486a-b23d-790410009406\") " Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.747812 4729 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.747829 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24dbee0b-07e8-420f-af0a-83c8d5ccf70f-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.751543 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-scripts" (OuterVolumeSpecName: "scripts") pod "ac0aeb60-0855-486a-b23d-790410009406" (UID: "ac0aeb60-0855-486a-b23d-790410009406"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.751712 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0aeb60-0855-486a-b23d-790410009406-kube-api-access-rbqsr" (OuterVolumeSpecName: "kube-api-access-rbqsr") pod "ac0aeb60-0855-486a-b23d-790410009406" (UID: "ac0aeb60-0855-486a-b23d-790410009406"). InnerVolumeSpecName "kube-api-access-rbqsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.779950 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac0aeb60-0855-486a-b23d-790410009406" (UID: "ac0aeb60-0855-486a-b23d-790410009406"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.791031 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-config-data" (OuterVolumeSpecName: "config-data") pod "ac0aeb60-0855-486a-b23d-790410009406" (UID: "ac0aeb60-0855-486a-b23d-790410009406"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.850087 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.850131 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.850144 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0aeb60-0855-486a-b23d-790410009406-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:56 crc kubenswrapper[4729]: I0127 14:39:56.850153 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbqsr\" (UniqueName: \"kubernetes.io/projected/ac0aeb60-0855-486a-b23d-790410009406-kube-api-access-rbqsr\") on node \"crc\" DevicePath \"\"" Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.194369 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" event={"ID":"24dbee0b-07e8-420f-af0a-83c8d5ccf70f","Type":"ContainerDied","Data":"2aec6916665028314b356c1bea6a1edd5bcbdf9ffe97bb004cd6d0a096cd68e9"} Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.194433 4729 scope.go:117] "RemoveContainer" 
containerID="23ddc62ab3e3c4ad6c59d758a96fc2ab6dd238eb52d8d0b9910ffbe3d0645735" Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.194476 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-d9cbb5dfc-kxwhh" Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.197196 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-m7xd4" event={"ID":"ac0aeb60-0855-486a-b23d-790410009406","Type":"ContainerDied","Data":"5fbcd160b21c3ca4533e94d1559598ce8fd18d48c82f314287eb32521c2c9ca4"} Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.197225 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-m7xd4" Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.197233 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fbcd160b21c3ca4533e94d1559598ce8fd18d48c82f314287eb32521c2c9ca4" Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.243281 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-d9cbb5dfc-kxwhh"] Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.264056 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-d9cbb5dfc-kxwhh"] Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.278182 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.278519 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-api" containerID="cri-o://386ea332bce9cc4275f68ebca243629ae7bac2dd75d3a167410fea880f3a046c" gracePeriod=30 Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.279541 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" 
containerName="aodh-listener" containerID="cri-o://074e9cd7635bad89c24dc98ce10472fdb1437435842d8a3db4187f453baa064f" gracePeriod=30 Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.279653 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-notifier" containerID="cri-o://39b926b6eaa04d6f5bdef659507224f01935695b0ed457e08163eb269499d1ff" gracePeriod=30 Jan 27 14:39:57 crc kubenswrapper[4729]: I0127 14:39:57.279707 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-evaluator" containerID="cri-o://d8d24774e098daca70ccfe9ec0639e146e9ed676ba8f0562868f5d420f9fda94" gracePeriod=30 Jan 27 14:39:58 crc kubenswrapper[4729]: I0127 14:39:58.078957 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" path="/var/lib/kubelet/pods/24dbee0b-07e8-420f-af0a-83c8d5ccf70f/volumes" Jan 27 14:39:58 crc kubenswrapper[4729]: I0127 14:39:58.230958 4729 generic.go:334] "Generic (PLEG): container finished" podID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerID="386ea332bce9cc4275f68ebca243629ae7bac2dd75d3a167410fea880f3a046c" exitCode=0 Jan 27 14:39:58 crc kubenswrapper[4729]: I0127 14:39:58.231002 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerDied","Data":"386ea332bce9cc4275f68ebca243629ae7bac2dd75d3a167410fea880f3a046c"} Jan 27 14:39:59 crc kubenswrapper[4729]: I0127 14:39:59.255413 4729 generic.go:334] "Generic (PLEG): container finished" podID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerID="d8d24774e098daca70ccfe9ec0639e146e9ed676ba8f0562868f5d420f9fda94" exitCode=0 Jan 27 14:39:59 crc kubenswrapper[4729]: I0127 14:39:59.255497 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerDied","Data":"d8d24774e098daca70ccfe9ec0639e146e9ed676ba8f0562868f5d420f9fda94"} Jan 27 14:40:01 crc kubenswrapper[4729]: I0127 14:40:01.235149 4729 scope.go:117] "RemoveContainer" containerID="ac7329e4e741fe65f9695bd0e692d86682d0f6404db0ce56ab8d15f7a7ee265a" Jan 27 14:40:01 crc kubenswrapper[4729]: I0127 14:40:01.286407 4729 generic.go:334] "Generic (PLEG): container finished" podID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerID="39b926b6eaa04d6f5bdef659507224f01935695b0ed457e08163eb269499d1ff" exitCode=0 Jan 27 14:40:01 crc kubenswrapper[4729]: I0127 14:40:01.286494 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerDied","Data":"39b926b6eaa04d6f5bdef659507224f01935695b0ed457e08163eb269499d1ff"} Jan 27 14:40:02 crc kubenswrapper[4729]: I0127 14:40:02.635465 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.10:5671: connect: connection refused" Jan 27 14:40:03 crc kubenswrapper[4729]: I0127 14:40:03.253043 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="75b1f41d-64ad-4dec-a082-9e81438dfe0f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.11:5671: connect: connection refused" Jan 27 14:40:04 crc kubenswrapper[4729]: I0127 14:40:04.852766 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q5jzg" podUID="9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb" containerName="registry-server" probeResult="failure" output=< Jan 27 14:40:04 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:40:04 crc kubenswrapper[4729]: > Jan 27 14:40:04 crc kubenswrapper[4729]: I0127 
14:40:04.953973 4729 scope.go:117] "RemoveContainer" containerID="9e32dd7451d221d043d9724a8f9a6b7f7d9a0369c27ad7cc736ec0d9250da2f6" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.227862 4729 scope.go:117] "RemoveContainer" containerID="356261c680d35fde5ab345835cf1d45e880811180881256176404acaa9f35d70" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.270630 4729 scope.go:117] "RemoveContainer" containerID="5ac6f89f24eeeede04efd098de369bdc1a2d23a92357beb7e0e20e446cd5072e" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.312222 4729 scope.go:117] "RemoveContainer" containerID="dc858358caa87c48d541f1298e5c8e66d360e7f949d596bbe14d3e47febbd8e9" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.353481 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" event={"ID":"90776e5b-71cf-43d1-969b-16278afed3cf","Type":"ContainerStarted","Data":"dba80cf5335d43ae478771d9eaa736fc1d07b72430365757aba06938d4cd1e83"} Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.385493 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" podStartSLOduration=5.458068647 podStartE2EDuration="19.385471884s" podCreationTimestamp="2026-01-27 14:39:46 +0000 UTC" firstStartedPulling="2026-01-27 14:39:51.125631733 +0000 UTC m=+2077.709822737" lastFinishedPulling="2026-01-27 14:40:05.05303497 +0000 UTC m=+2091.637225974" observedRunningTime="2026-01-27 14:40:05.372007687 +0000 UTC m=+2091.956198691" watchObservedRunningTime="2026-01-27 14:40:05.385471884 +0000 UTC m=+2091.969662898" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.415151 4729 scope.go:117] "RemoveContainer" containerID="9e1a560e097c137694bb9d2b51a4a3e63b94f1421aa80b5afa787aed36c0174b" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.443621 4729 scope.go:117] "RemoveContainer" 
containerID="1f4c556366ec5aee38fde4e7b40629292b0e95a7e8ea657eb42bc8ac3bbf6bcd" Jan 27 14:40:05 crc kubenswrapper[4729]: I0127 14:40:05.475393 4729 scope.go:117] "RemoveContainer" containerID="538099ac1b22a1b548e632269a4ac1a5f801ecf28b1c6e3913d34b01c6ec5efd" Jan 27 14:40:06 crc kubenswrapper[4729]: I0127 14:40:06.379769 4729 generic.go:334] "Generic (PLEG): container finished" podID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerID="074e9cd7635bad89c24dc98ce10472fdb1437435842d8a3db4187f453baa064f" exitCode=0 Jan 27 14:40:06 crc kubenswrapper[4729]: I0127 14:40:06.379842 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerDied","Data":"074e9cd7635bad89c24dc98ce10472fdb1437435842d8a3db4187f453baa064f"} Jan 27 14:40:06 crc kubenswrapper[4729]: I0127 14:40:06.989250 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.047149 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-internal-tls-certs\") pod \"2c8103be-a039-4e55-88f3-9e75f2f123dc\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.047344 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-public-tls-certs\") pod \"2c8103be-a039-4e55-88f3-9e75f2f123dc\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.047385 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qxsc\" (UniqueName: \"kubernetes.io/projected/2c8103be-a039-4e55-88f3-9e75f2f123dc-kube-api-access-7qxsc\") pod 
\"2c8103be-a039-4e55-88f3-9e75f2f123dc\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.047428 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-config-data\") pod \"2c8103be-a039-4e55-88f3-9e75f2f123dc\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.047573 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-scripts\") pod \"2c8103be-a039-4e55-88f3-9e75f2f123dc\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.047606 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-combined-ca-bundle\") pod \"2c8103be-a039-4e55-88f3-9e75f2f123dc\" (UID: \"2c8103be-a039-4e55-88f3-9e75f2f123dc\") " Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.054950 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-scripts" (OuterVolumeSpecName: "scripts") pod "2c8103be-a039-4e55-88f3-9e75f2f123dc" (UID: "2c8103be-a039-4e55-88f3-9e75f2f123dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.066296 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c8103be-a039-4e55-88f3-9e75f2f123dc-kube-api-access-7qxsc" (OuterVolumeSpecName: "kube-api-access-7qxsc") pod "2c8103be-a039-4e55-88f3-9e75f2f123dc" (UID: "2c8103be-a039-4e55-88f3-9e75f2f123dc"). InnerVolumeSpecName "kube-api-access-7qxsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.159065 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qxsc\" (UniqueName: \"kubernetes.io/projected/2c8103be-a039-4e55-88f3-9e75f2f123dc-kube-api-access-7qxsc\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.159110 4729 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.159272 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2c8103be-a039-4e55-88f3-9e75f2f123dc" (UID: "2c8103be-a039-4e55-88f3-9e75f2f123dc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.217758 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c8103be-a039-4e55-88f3-9e75f2f123dc" (UID: "2c8103be-a039-4e55-88f3-9e75f2f123dc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.256986 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c8103be-a039-4e55-88f3-9e75f2f123dc" (UID: "2c8103be-a039-4e55-88f3-9e75f2f123dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.262755 4729 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.262805 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.262824 4729 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.315712 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-config-data" (OuterVolumeSpecName: "config-data") pod "2c8103be-a039-4e55-88f3-9e75f2f123dc" (UID: "2c8103be-a039-4e55-88f3-9e75f2f123dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.366284 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c8103be-a039-4e55-88f3-9e75f2f123dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.404655 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2c8103be-a039-4e55-88f3-9e75f2f123dc","Type":"ContainerDied","Data":"65dce45fac20859e8edcbfc5bc5c78c9556472f53f135a92d591e12b034e833e"} Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.404695 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.404740 4729 scope.go:117] "RemoveContainer" containerID="074e9cd7635bad89c24dc98ce10472fdb1437435842d8a3db4187f453baa064f" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.444465 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.462943 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.469417 4729 scope.go:117] "RemoveContainer" containerID="39b926b6eaa04d6f5bdef659507224f01935695b0ed457e08163eb269499d1ff" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.482747 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 27 14:40:07 crc kubenswrapper[4729]: E0127 14:40:07.483409 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.483432 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" Jan 27 14:40:07 crc kubenswrapper[4729]: E0127 14:40:07.483456 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-evaluator" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.483465 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-evaluator" Jan 27 14:40:07 crc kubenswrapper[4729]: E0127 14:40:07.483492 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-listener" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.483500 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" 
containerName="aodh-listener" Jan 27 14:40:07 crc kubenswrapper[4729]: E0127 14:40:07.483517 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-notifier" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.483534 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-notifier" Jan 27 14:40:07 crc kubenswrapper[4729]: E0127 14:40:07.483549 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-api" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.483561 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-api" Jan 27 14:40:07 crc kubenswrapper[4729]: E0127 14:40:07.483585 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0aeb60-0855-486a-b23d-790410009406" containerName="aodh-db-sync" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.483592 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0aeb60-0855-486a-b23d-790410009406" containerName="aodh-db-sync" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.484197 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-listener" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.484228 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-evaluator" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.484239 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-api" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.484256 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dbee0b-07e8-420f-af0a-83c8d5ccf70f" containerName="heat-engine" Jan 
27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.484276 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" containerName="aodh-notifier" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.484294 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0aeb60-0855-486a-b23d-790410009406" containerName="aodh-db-sync" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.489513 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.495141 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.495477 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.495696 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.496013 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-zvnk7" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.496553 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.500562 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.523829 4729 scope.go:117] "RemoveContainer" containerID="d8d24774e098daca70ccfe9ec0639e146e9ed676ba8f0562868f5d420f9fda94" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.558990 4729 scope.go:117] "RemoveContainer" containerID="386ea332bce9cc4275f68ebca243629ae7bac2dd75d3a167410fea880f3a046c" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.572121 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6fl\" (UniqueName: \"kubernetes.io/projected/e0df588d-304c-41cb-b6bd-9d7d5987ebef-kube-api-access-wf6fl\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.572232 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-scripts\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.572307 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.572464 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-config-data\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.572566 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-public-tls-certs\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.572674 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-internal-tls-certs\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.674928 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-scripts\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.675010 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.675094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-config-data\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.675171 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-public-tls-certs\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.675250 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-internal-tls-certs\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.675292 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wf6fl\" (UniqueName: \"kubernetes.io/projected/e0df588d-304c-41cb-b6bd-9d7d5987ebef-kube-api-access-wf6fl\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.680272 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.681387 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-internal-tls-certs\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.682000 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-public-tls-certs\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.682861 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-config-data\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.683356 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0df588d-304c-41cb-b6bd-9d7d5987ebef-scripts\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.693028 
4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6fl\" (UniqueName: \"kubernetes.io/projected/e0df588d-304c-41cb-b6bd-9d7d5987ebef-kube-api-access-wf6fl\") pod \"aodh-0\" (UID: \"e0df588d-304c-41cb-b6bd-9d7d5987ebef\") " pod="openstack/aodh-0" Jan 27 14:40:07 crc kubenswrapper[4729]: I0127 14:40:07.811969 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 27 14:40:08 crc kubenswrapper[4729]: I0127 14:40:08.077630 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c8103be-a039-4e55-88f3-9e75f2f123dc" path="/var/lib/kubelet/pods/2c8103be-a039-4e55-88f3-9e75f2f123dc/volumes" Jan 27 14:40:08 crc kubenswrapper[4729]: I0127 14:40:08.366627 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 27 14:40:08 crc kubenswrapper[4729]: W0127 14:40:08.369678 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0df588d_304c_41cb_b6bd_9d7d5987ebef.slice/crio-9e15e05e9439dedf10e473b18affbb8b20cf8065d13545a721377144f57dd45d WatchSource:0}: Error finding container 9e15e05e9439dedf10e473b18affbb8b20cf8065d13545a721377144f57dd45d: Status 404 returned error can't find the container with id 9e15e05e9439dedf10e473b18affbb8b20cf8065d13545a721377144f57dd45d Jan 27 14:40:08 crc kubenswrapper[4729]: I0127 14:40:08.418685 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e0df588d-304c-41cb-b6bd-9d7d5987ebef","Type":"ContainerStarted","Data":"9e15e05e9439dedf10e473b18affbb8b20cf8065d13545a721377144f57dd45d"} Jan 27 14:40:09 crc kubenswrapper[4729]: I0127 14:40:09.438073 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e0df588d-304c-41cb-b6bd-9d7d5987ebef","Type":"ContainerStarted","Data":"c2164beba18a0d7b679385261ec90345f5fa582d3000def401c68ffbf8927ef1"} Jan 27 14:40:12 crc 
kubenswrapper[4729]: I0127 14:40:12.470760 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e0df588d-304c-41cb-b6bd-9d7d5987ebef","Type":"ContainerStarted","Data":"cf5b9413c612c5a240b5e1cd190e5c02cb676494e4c440ef13e94181389a2c32"} Jan 27 14:40:12 crc kubenswrapper[4729]: I0127 14:40:12.626115 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 27 14:40:13 crc kubenswrapper[4729]: I0127 14:40:13.253181 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 27 14:40:13 crc kubenswrapper[4729]: I0127 14:40:13.313965 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:40:14 crc kubenswrapper[4729]: I0127 14:40:14.498178 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e0df588d-304c-41cb-b6bd-9d7d5987ebef","Type":"ContainerStarted","Data":"8764754d4defc21faacf300a7f086cd07c35ef000c40e63f64fa0115427f6300"} Jan 27 14:40:14 crc kubenswrapper[4729]: I0127 14:40:14.509471 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q5jzg" podUID="9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb" containerName="registry-server" probeResult="failure" output=< Jan 27 14:40:14 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:40:14 crc kubenswrapper[4729]: > Jan 27 14:40:16 crc kubenswrapper[4729]: I0127 14:40:16.526561 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e0df588d-304c-41cb-b6bd-9d7d5987ebef","Type":"ContainerStarted","Data":"dde4a856d42f36609c0b5d6d79e956173b5957046288f52d854d02385630e6b0"} Jan 27 14:40:16 crc kubenswrapper[4729]: I0127 14:40:16.554531 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.561693567 
podStartE2EDuration="9.554510614s" podCreationTimestamp="2026-01-27 14:40:07 +0000 UTC" firstStartedPulling="2026-01-27 14:40:08.372796904 +0000 UTC m=+2094.956987908" lastFinishedPulling="2026-01-27 14:40:15.365613951 +0000 UTC m=+2101.949804955" observedRunningTime="2026-01-27 14:40:16.551531388 +0000 UTC m=+2103.135722422" watchObservedRunningTime="2026-01-27 14:40:16.554510614 +0000 UTC m=+2103.138701618" Jan 27 14:40:18 crc kubenswrapper[4729]: I0127 14:40:18.493565 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" containerID="cri-o://3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214" gracePeriod=604795 Jan 27 14:40:18 crc kubenswrapper[4729]: I0127 14:40:18.568124 4729 generic.go:334] "Generic (PLEG): container finished" podID="90776e5b-71cf-43d1-969b-16278afed3cf" containerID="dba80cf5335d43ae478771d9eaa736fc1d07b72430365757aba06938d4cd1e83" exitCode=0 Jan 27 14:40:18 crc kubenswrapper[4729]: I0127 14:40:18.568205 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" event={"ID":"90776e5b-71cf-43d1-969b-16278afed3cf","Type":"ContainerDied","Data":"dba80cf5335d43ae478771d9eaa736fc1d07b72430365757aba06938d4cd1e83"} Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.183238 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.243595 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.374098 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-ssh-key-openstack-edpm-ipam\") pod \"90776e5b-71cf-43d1-969b-16278afed3cf\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.374170 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-repo-setup-combined-ca-bundle\") pod \"90776e5b-71cf-43d1-969b-16278afed3cf\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.374236 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-inventory\") pod \"90776e5b-71cf-43d1-969b-16278afed3cf\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.374576 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr8vl\" (UniqueName: \"kubernetes.io/projected/90776e5b-71cf-43d1-969b-16278afed3cf-kube-api-access-lr8vl\") pod \"90776e5b-71cf-43d1-969b-16278afed3cf\" (UID: \"90776e5b-71cf-43d1-969b-16278afed3cf\") " Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.383690 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90776e5b-71cf-43d1-969b-16278afed3cf-kube-api-access-lr8vl" (OuterVolumeSpecName: "kube-api-access-lr8vl") pod "90776e5b-71cf-43d1-969b-16278afed3cf" (UID: "90776e5b-71cf-43d1-969b-16278afed3cf"). InnerVolumeSpecName "kube-api-access-lr8vl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.383981 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "90776e5b-71cf-43d1-969b-16278afed3cf" (UID: "90776e5b-71cf-43d1-969b-16278afed3cf"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.414867 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90776e5b-71cf-43d1-969b-16278afed3cf" (UID: "90776e5b-71cf-43d1-969b-16278afed3cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.416411 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-inventory" (OuterVolumeSpecName: "inventory") pod "90776e5b-71cf-43d1-969b-16278afed3cf" (UID: "90776e5b-71cf-43d1-969b-16278afed3cf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.478588 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr8vl\" (UniqueName: \"kubernetes.io/projected/90776e5b-71cf-43d1-969b-16278afed3cf-kube-api-access-lr8vl\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.478634 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.478643 4729 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.478654 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90776e5b-71cf-43d1-969b-16278afed3cf-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.594359 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" event={"ID":"90776e5b-71cf-43d1-969b-16278afed3cf","Type":"ContainerDied","Data":"fd73f6142baf6b25e84851102fe66bdddbef1b3340c2e4b2000eef057c79753f"} Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.595017 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd73f6142baf6b25e84851102fe66bdddbef1b3340c2e4b2000eef057c79753f" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.594418 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.685983 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99"] Jan 27 14:40:20 crc kubenswrapper[4729]: E0127 14:40:20.686569 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90776e5b-71cf-43d1-969b-16278afed3cf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.686590 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="90776e5b-71cf-43d1-969b-16278afed3cf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.686851 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="90776e5b-71cf-43d1-969b-16278afed3cf" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.687746 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.689627 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.690246 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.691764 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.691962 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.706953 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99"] Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.785490 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdfh\" (UniqueName: \"kubernetes.io/projected/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-kube-api-access-mqdfh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.785836 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.786079 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.888696 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.888926 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdfh\" (UniqueName: \"kubernetes.io/projected/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-kube-api-access-mqdfh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.889029 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.892388 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: 
\"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.896911 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:20 crc kubenswrapper[4729]: I0127 14:40:20.906703 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdfh\" (UniqueName: \"kubernetes.io/projected/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-kube-api-access-mqdfh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-dtn99\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:21 crc kubenswrapper[4729]: I0127 14:40:21.014106 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:21 crc kubenswrapper[4729]: I0127 14:40:21.915712 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99"] Jan 27 14:40:22 crc kubenswrapper[4729]: I0127 14:40:22.615409 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" event={"ID":"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc","Type":"ContainerStarted","Data":"92a1ac94ed433a13e107ed9c2d0592508e21ff471def64aedfe4597eb99fd9bd"} Jan 27 14:40:23 crc kubenswrapper[4729]: I0127 14:40:23.627089 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" event={"ID":"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc","Type":"ContainerStarted","Data":"464b821bd5d516542c987e46d396b22576a396d35dde8e25d68a46868633f06a"} Jan 27 14:40:23 crc kubenswrapper[4729]: I0127 14:40:23.656067 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" podStartSLOduration=2.926701333 podStartE2EDuration="3.656042176s" podCreationTimestamp="2026-01-27 14:40:20 +0000 UTC" firstStartedPulling="2026-01-27 14:40:21.937975424 +0000 UTC m=+2108.522166428" lastFinishedPulling="2026-01-27 14:40:22.667316267 +0000 UTC m=+2109.251507271" observedRunningTime="2026-01-27 14:40:23.641599625 +0000 UTC m=+2110.225790629" watchObservedRunningTime="2026-01-27 14:40:23.656042176 +0000 UTC m=+2110.240233200" Jan 27 14:40:24 crc kubenswrapper[4729]: I0127 14:40:24.489098 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q5jzg" podUID="9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb" containerName="registry-server" probeResult="failure" output=< Jan 27 14:40:24 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:40:24 crc 
kubenswrapper[4729]: > Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.325160 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.365390 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-plugins-conf\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.365468 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dgz4\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-kube-api-access-4dgz4\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.365495 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/190a5200-58b1-4ada-ab5f-47543de0795e-erlang-cookie-secret\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.365726 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-confd\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.365761 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-plugins\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 
27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.368179 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.368240 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-erlang-cookie\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.368449 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-config-data\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.368489 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-server-conf\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.368634 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-tls\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.368665 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/190a5200-58b1-4ada-ab5f-47543de0795e-pod-info\") pod \"190a5200-58b1-4ada-ab5f-47543de0795e\" (UID: \"190a5200-58b1-4ada-ab5f-47543de0795e\") " Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.380274 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.388921 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.394557 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.396989 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/190a5200-58b1-4ada-ab5f-47543de0795e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.405888 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.406811 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-kube-api-access-4dgz4" (OuterVolumeSpecName: "kube-api-access-4dgz4") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "kube-api-access-4dgz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.408363 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/190a5200-58b1-4ada-ab5f-47543de0795e-pod-info" (OuterVolumeSpecName: "pod-info") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.437227 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a" (OuterVolumeSpecName: "persistence") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.497990 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498025 4729 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/190a5200-58b1-4ada-ab5f-47543de0795e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498038 4729 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498049 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dgz4\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-kube-api-access-4dgz4\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498061 4729 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/190a5200-58b1-4ada-ab5f-47543de0795e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498069 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498100 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") on node \"crc\" " Jan 27 
14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.498110 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.517031 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-config-data" (OuterVolumeSpecName: "config-data") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.518169 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-server-conf" (OuterVolumeSpecName: "server-conf") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.548549 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.548848 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a") on node "crc" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.568002 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "190a5200-58b1-4ada-ab5f-47543de0795e" (UID: "190a5200-58b1-4ada-ab5f-47543de0795e"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.600445 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/190a5200-58b1-4ada-ab5f-47543de0795e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.600693 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.600755 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.600812 4729 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/190a5200-58b1-4ada-ab5f-47543de0795e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.653660 4729 generic.go:334] "Generic (PLEG): container finished" podID="190a5200-58b1-4ada-ab5f-47543de0795e" containerID="3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214" exitCode=0 Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.653724 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"190a5200-58b1-4ada-ab5f-47543de0795e","Type":"ContainerDied","Data":"3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214"} Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.653751 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.653779 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"190a5200-58b1-4ada-ab5f-47543de0795e","Type":"ContainerDied","Data":"03c505022d9c0f4c9d0706cfa88bb39fa1f520eb68c15f12de5f98873885b1b9"} Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.653800 4729 scope.go:117] "RemoveContainer" containerID="3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.710359 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.716048 4729 scope.go:117] "RemoveContainer" containerID="2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.736479 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.756607 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:40:25 crc kubenswrapper[4729]: E0127 14:40:25.757225 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.757243 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" Jan 27 14:40:25 crc kubenswrapper[4729]: E0127 14:40:25.757257 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="setup-container" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.757262 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="setup-container" Jan 27 14:40:25 crc kubenswrapper[4729]: 
I0127 14:40:25.757501 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" containerName="rabbitmq" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.758867 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.781813 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.809137 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.809421 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-config-data\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.809535 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.809709 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9mvk\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-kube-api-access-z9mvk\") pod \"rabbitmq-server-1\" (UID: 
\"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.809955 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.810063 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.810188 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.810508 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.810715 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " 
pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.810846 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.811056 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.812686 4729 scope.go:117] "RemoveContainer" containerID="3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214" Jan 27 14:40:25 crc kubenswrapper[4729]: E0127 14:40:25.815511 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214\": container with ID starting with 3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214 not found: ID does not exist" containerID="3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.815553 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214"} err="failed to get container status \"3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214\": rpc error: code = NotFound desc = could not find container \"3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214\": container with ID starting with 
3803537e6c81207bc1f915bbeacf027d0a659aee15adca5badb4997309114214 not found: ID does not exist" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.815578 4729 scope.go:117] "RemoveContainer" containerID="2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742" Jan 27 14:40:25 crc kubenswrapper[4729]: E0127 14:40:25.819656 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742\": container with ID starting with 2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742 not found: ID does not exist" containerID="2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.819708 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742"} err="failed to get container status \"2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742\": rpc error: code = NotFound desc = could not find container \"2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742\": container with ID starting with 2d918a535e8cf01053bc166c108df0a5941dec0402d5ac089c2d5a9139a21742 not found: ID does not exist" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.913934 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.914008 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: 
\"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.914044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.914269 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915094 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915171 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915270 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc 
kubenswrapper[4729]: I0127 14:40:25.915335 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-config-data\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915366 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915451 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9mvk\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-kube-api-access-z9mvk\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915511 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.915660 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.917125 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.919392 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-server-conf\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.919627 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.920107 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-config-data\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.920449 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.920802 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 
14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.921486 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-pod-info\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.922568 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.922609 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b9a4be69ff768d889fe3e43ff3b03318c02894e033bfc6af486c241199bf0c68/globalmount\"" pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.924576 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:25 crc kubenswrapper[4729]: I0127 14:40:25.934318 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9mvk\" (UniqueName: \"kubernetes.io/projected/0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1-kube-api-access-z9mvk\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:26 crc kubenswrapper[4729]: I0127 14:40:26.004410 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e68c1d70-5948-4199-8ac1-bf9861573d8a\") pod \"rabbitmq-server-1\" (UID: \"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1\") " pod="openstack/rabbitmq-server-1" Jan 27 14:40:26 crc kubenswrapper[4729]: I0127 14:40:26.064295 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190a5200-58b1-4ada-ab5f-47543de0795e" path="/var/lib/kubelet/pods/190a5200-58b1-4ada-ab5f-47543de0795e/volumes" Jan 27 14:40:26 crc kubenswrapper[4729]: I0127 14:40:26.100957 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 27 14:40:26 crc kubenswrapper[4729]: I0127 14:40:26.665992 4729 generic.go:334] "Generic (PLEG): container finished" podID="3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" containerID="464b821bd5d516542c987e46d396b22576a396d35dde8e25d68a46868633f06a" exitCode=0 Jan 27 14:40:26 crc kubenswrapper[4729]: I0127 14:40:26.666216 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" event={"ID":"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc","Type":"ContainerDied","Data":"464b821bd5d516542c987e46d396b22576a396d35dde8e25d68a46868633f06a"} Jan 27 14:40:26 crc kubenswrapper[4729]: W0127 14:40:26.693835 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fdb2033_6ffe_46d3_8eaf_1e824a9a47c1.slice/crio-c2d212c7a39606f7c929ef18b2e365fc7e133f359d276d6682cfad53661bee2a WatchSource:0}: Error finding container c2d212c7a39606f7c929ef18b2e365fc7e133f359d276d6682cfad53661bee2a: Status 404 returned error can't find the container with id c2d212c7a39606f7c929ef18b2e365fc7e133f359d276d6682cfad53661bee2a Jan 27 14:40:26 crc kubenswrapper[4729]: I0127 14:40:26.694362 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 27 14:40:27 crc 
kubenswrapper[4729]: I0127 14:40:27.681027 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1","Type":"ContainerStarted","Data":"c2d212c7a39606f7c929ef18b2e365fc7e133f359d276d6682cfad53661bee2a"} Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.608073 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.691680 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqdfh\" (UniqueName: \"kubernetes.io/projected/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-kube-api-access-mqdfh\") pod \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.692624 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-ssh-key-openstack-edpm-ipam\") pod \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.692753 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-inventory\") pod \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\" (UID: \"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc\") " Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.694713 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1","Type":"ContainerStarted","Data":"4fec1905154fb7b2dd4b02bf29b069a5751bb0a777cfe43a83b4247d764e5632"} Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.696428 4729 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" event={"ID":"3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc","Type":"ContainerDied","Data":"92a1ac94ed433a13e107ed9c2d0592508e21ff471def64aedfe4597eb99fd9bd"} Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.696543 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a1ac94ed433a13e107ed9c2d0592508e21ff471def64aedfe4597eb99fd9bd" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.696505 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-dtn99" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.696968 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-kube-api-access-mqdfh" (OuterVolumeSpecName: "kube-api-access-mqdfh") pod "3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" (UID: "3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc"). InnerVolumeSpecName "kube-api-access-mqdfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.726221 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-inventory" (OuterVolumeSpecName: "inventory") pod "3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" (UID: "3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.741073 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" (UID: "3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.788783 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww"] Jan 27 14:40:28 crc kubenswrapper[4729]: E0127 14:40:28.789338 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.789351 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.789596 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.790436 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.796491 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqdfh\" (UniqueName: \"kubernetes.io/projected/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-kube-api-access-mqdfh\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.796528 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.796542 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.802752 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww"] Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.898914 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.899342 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.899441 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvz79\" (UniqueName: \"kubernetes.io/projected/9795f0ec-6b8d-4470-bd63-584192019fcf-kube-api-access-dvz79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:28 crc kubenswrapper[4729]: I0127 14:40:28.899475 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.001547 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.001744 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.002576 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dvz79\" (UniqueName: \"kubernetes.io/projected/9795f0ec-6b8d-4470-bd63-584192019fcf-kube-api-access-dvz79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.002694 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.005410 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.005481 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.007505 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: 
\"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.025522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvz79\" (UniqueName: \"kubernetes.io/projected/9795f0ec-6b8d-4470-bd63-584192019fcf-kube-api-access-dvz79\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.149751 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:40:29 crc kubenswrapper[4729]: I0127 14:40:29.738390 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww"] Jan 27 14:40:30 crc kubenswrapper[4729]: I0127 14:40:30.720592 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" event={"ID":"9795f0ec-6b8d-4470-bd63-584192019fcf","Type":"ContainerStarted","Data":"11b59732d28569d13fcc09a106af41d76604652a723f0ef0754ae9ab2eb68276"} Jan 27 14:40:31 crc kubenswrapper[4729]: I0127 14:40:31.742230 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" event={"ID":"9795f0ec-6b8d-4470-bd63-584192019fcf","Type":"ContainerStarted","Data":"d009b4bac31d3e8d37b2ee2a157d6364e0f6cb2c5059280a3d1617ef4b1b6b78"} Jan 27 14:40:31 crc kubenswrapper[4729]: I0127 14:40:31.776506 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" podStartSLOduration=2.97015707 podStartE2EDuration="3.776487182s" podCreationTimestamp="2026-01-27 14:40:28 +0000 UTC" firstStartedPulling="2026-01-27 14:40:29.728605863 
+0000 UTC m=+2116.312796867" lastFinishedPulling="2026-01-27 14:40:30.534935955 +0000 UTC m=+2117.119126979" observedRunningTime="2026-01-27 14:40:31.767168892 +0000 UTC m=+2118.351359916" watchObservedRunningTime="2026-01-27 14:40:31.776487182 +0000 UTC m=+2118.360678196" Jan 27 14:40:33 crc kubenswrapper[4729]: I0127 14:40:33.483216 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:40:33 crc kubenswrapper[4729]: I0127 14:40:33.535899 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q5jzg" Jan 27 14:40:33 crc kubenswrapper[4729]: I0127 14:40:33.609994 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q5jzg"] Jan 27 14:40:33 crc kubenswrapper[4729]: I0127 14:40:33.812532 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:40:33 crc kubenswrapper[4729]: I0127 14:40:33.812947 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwd2l" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="registry-server" containerID="cri-o://6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c" gracePeriod=2 Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.522778 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.659348 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-catalog-content\") pod \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.659714 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt6wb\" (UniqueName: \"kubernetes.io/projected/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-kube-api-access-bt6wb\") pod \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.659763 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-utilities\") pod \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\" (UID: \"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e\") " Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.667420 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-utilities" (OuterVolumeSpecName: "utilities") pod "b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" (UID: "b5ceaa91-d5cc-4ac9-8351-ad4ef924678e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.672606 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-kube-api-access-bt6wb" (OuterVolumeSpecName: "kube-api-access-bt6wb") pod "b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" (UID: "b5ceaa91-d5cc-4ac9-8351-ad4ef924678e"). InnerVolumeSpecName "kube-api-access-bt6wb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.754338 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" (UID: "b5ceaa91-d5cc-4ac9-8351-ad4ef924678e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.763131 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.763173 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt6wb\" (UniqueName: \"kubernetes.io/projected/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-kube-api-access-bt6wb\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.763186 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.908118 4729 generic.go:334] "Generic (PLEG): container finished" podID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerID="6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c" exitCode=0 Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.908208 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwd2l" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.908212 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwd2l" event={"ID":"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e","Type":"ContainerDied","Data":"6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c"} Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.908346 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwd2l" event={"ID":"b5ceaa91-d5cc-4ac9-8351-ad4ef924678e","Type":"ContainerDied","Data":"501537f3b0cb364765d9e36b99a7eb020cd5792d6af47c975ecc9cc0a3cd1841"} Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.908372 4729 scope.go:117] "RemoveContainer" containerID="6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.948325 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.948722 4729 scope.go:117] "RemoveContainer" containerID="21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7" Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.961916 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mwd2l"] Jan 27 14:40:34 crc kubenswrapper[4729]: I0127 14:40:34.979093 4729 scope.go:117] "RemoveContainer" containerID="dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422" Jan 27 14:40:35 crc kubenswrapper[4729]: I0127 14:40:35.042377 4729 scope.go:117] "RemoveContainer" containerID="6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c" Jan 27 14:40:35 crc kubenswrapper[4729]: E0127 14:40:35.042921 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c\": container with ID starting with 6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c not found: ID does not exist" containerID="6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c" Jan 27 14:40:35 crc kubenswrapper[4729]: I0127 14:40:35.042964 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c"} err="failed to get container status \"6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c\": rpc error: code = NotFound desc = could not find container \"6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c\": container with ID starting with 6052a428ec71469e5de49e3fef94621b3e86c925e6de1447d87e6b14f5c72b6c not found: ID does not exist" Jan 27 14:40:35 crc kubenswrapper[4729]: I0127 14:40:35.042991 4729 scope.go:117] "RemoveContainer" containerID="21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7" Jan 27 14:40:35 crc kubenswrapper[4729]: E0127 14:40:35.043482 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7\": container with ID starting with 21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7 not found: ID does not exist" containerID="21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7" Jan 27 14:40:35 crc kubenswrapper[4729]: I0127 14:40:35.043521 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7"} err="failed to get container status \"21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7\": rpc error: code = NotFound desc = could not find container \"21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7\": container with ID 
starting with 21c6662de9a0e588cce152a6059276ba1c6b4bca436c391459f1a70c1774cac7 not found: ID does not exist" Jan 27 14:40:35 crc kubenswrapper[4729]: I0127 14:40:35.043549 4729 scope.go:117] "RemoveContainer" containerID="dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422" Jan 27 14:40:35 crc kubenswrapper[4729]: E0127 14:40:35.043822 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422\": container with ID starting with dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422 not found: ID does not exist" containerID="dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422" Jan 27 14:40:35 crc kubenswrapper[4729]: I0127 14:40:35.043868 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422"} err="failed to get container status \"dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422\": rpc error: code = NotFound desc = could not find container \"dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422\": container with ID starting with dbcf9f28367bc35860f377085dc063aebee4f1a545d9f530947366240bbaf422 not found: ID does not exist" Jan 27 14:40:36 crc kubenswrapper[4729]: I0127 14:40:36.062655 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" path="/var/lib/kubelet/pods/b5ceaa91-d5cc-4ac9-8351-ad4ef924678e/volumes" Jan 27 14:41:01 crc kubenswrapper[4729]: I0127 14:41:01.220313 4729 generic.go:334] "Generic (PLEG): container finished" podID="0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1" containerID="4fec1905154fb7b2dd4b02bf29b069a5751bb0a777cfe43a83b4247d764e5632" exitCode=0 Jan 27 14:41:01 crc kubenswrapper[4729]: I0127 14:41:01.220478 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-1" event={"ID":"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1","Type":"ContainerDied","Data":"4fec1905154fb7b2dd4b02bf29b069a5751bb0a777cfe43a83b4247d764e5632"} Jan 27 14:41:02 crc kubenswrapper[4729]: I0127 14:41:02.236248 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1","Type":"ContainerStarted","Data":"77603bcd93c69b9fb1375069fac98f0ea5eb31b716dba11e3620a03a1e2d76c8"} Jan 27 14:41:02 crc kubenswrapper[4729]: I0127 14:41:02.237028 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 27 14:41:02 crc kubenswrapper[4729]: I0127 14:41:02.259702 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.259663669 podStartE2EDuration="37.259663669s" podCreationTimestamp="2026-01-27 14:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:41:02.259242128 +0000 UTC m=+2148.843433152" watchObservedRunningTime="2026-01-27 14:41:02.259663669 +0000 UTC m=+2148.843854683" Jan 27 14:41:04 crc kubenswrapper[4729]: I0127 14:41:04.077053 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lhgr5"] Jan 27 14:41:04 crc kubenswrapper[4729]: I0127 14:41:04.082927 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-14a7-account-create-update-q9mp6"] Jan 27 14:41:04 crc kubenswrapper[4729]: I0127 14:41:04.098751 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lhgr5"] Jan 27 14:41:04 crc kubenswrapper[4729]: I0127 14:41:04.116264 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-14a7-account-create-update-q9mp6"] Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.066940 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7a37f0cf-1920-436e-b40d-ba267fb85828" path="/var/lib/kubelet/pods/7a37f0cf-1920-436e-b40d-ba267fb85828/volumes" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.071242 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78fb6de-dff5-4785-90ac-bfda868d9d12" path="/var/lib/kubelet/pods/e78fb6de-dff5-4785-90ac-bfda868d9d12/volumes" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.437691 4729 scope.go:117] "RemoveContainer" containerID="2447ecda9cd2fd75789f474b077981452246b1878e2ab3fdce9805758e83eb6f" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.474001 4729 scope.go:117] "RemoveContainer" containerID="1cf591707b8587dcc2a35be6612e08e49d29952c42ff3446d035a84b14a7e76c" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.541328 4729 scope.go:117] "RemoveContainer" containerID="793730f20c62a769cf31857f01a74079c290c34d6aec6ec0ed627da23c338e04" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.597734 4729 scope.go:117] "RemoveContainer" containerID="e4e442910ba0923662137915c2ceb1c4aa4a12f25271824c2ed01e9130803594" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.648380 4729 scope.go:117] "RemoveContainer" containerID="0c06215c5f758770f50428174ac0270a4d2cd81fe0f70dc84d40b54054566404" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.691354 4729 scope.go:117] "RemoveContainer" containerID="7301a756c69be98955bd51e6b9c16e80c898bf50f357fd7fbf6c64bec5ca08e2" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.745524 4729 scope.go:117] "RemoveContainer" containerID="9be07918cfb846752e460e6834b8e23657061bac59a7068f67832dba9247d706" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.835964 4729 scope.go:117] "RemoveContainer" containerID="c75feda8e12030e1f57fb2c1a0f76b1d48d08e554dccb9d3c838c90fa59dc782" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.857280 4729 scope.go:117] "RemoveContainer" containerID="22437fc2564b3fb71032a61c7d785899d431364ba912efc3cc173c497092a868" Jan 
27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.887053 4729 scope.go:117] "RemoveContainer" containerID="9d0a76d4af9821128fa2b7ebd1d044577dd2a7e4e2ea2cc7c1c3d6b99055045d" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.920211 4729 scope.go:117] "RemoveContainer" containerID="b6be0fcda697cfc1e6537b30c423615e44b8b583b0b5723cadcc4fabcb27fa4e" Jan 27 14:41:06 crc kubenswrapper[4729]: I0127 14:41:06.987418 4729 scope.go:117] "RemoveContainer" containerID="e278680905edc20897f5e3450ddec07a258ebeb1e0c3145525354855bb9092ba" Jan 27 14:41:07 crc kubenswrapper[4729]: I0127 14:41:07.028602 4729 scope.go:117] "RemoveContainer" containerID="f71c75a9a33bc14c274bf08de5bdd505fd2d45da05f180722618e0fdf2b2b196" Jan 27 14:41:07 crc kubenswrapper[4729]: I0127 14:41:07.070002 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nzfnl"] Jan 27 14:41:07 crc kubenswrapper[4729]: I0127 14:41:07.082597 4729 scope.go:117] "RemoveContainer" containerID="0ce7ecf61234150d6c951412f700d84da87e54a7444d3d99ad377fd084b677a5" Jan 27 14:41:07 crc kubenswrapper[4729]: I0127 14:41:07.089442 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nzfnl"] Jan 27 14:41:08 crc kubenswrapper[4729]: I0127 14:41:08.034438 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8d6c-account-create-update-2vb5h"] Jan 27 14:41:08 crc kubenswrapper[4729]: I0127 14:41:08.046923 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8d6c-account-create-update-2vb5h"] Jan 27 14:41:08 crc kubenswrapper[4729]: I0127 14:41:08.067744 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07fc7673-9046-4fee-a6f2-060f5566f405" path="/var/lib/kubelet/pods/07fc7673-9046-4fee-a6f2-060f5566f405/volumes" Jan 27 14:41:08 crc kubenswrapper[4729]: I0127 14:41:08.068666 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a65f219-5a7b-4c05-bf31-9faa4c7490a2" 
path="/var/lib/kubelet/pods/5a65f219-5a7b-4c05-bf31-9faa4c7490a2/volumes" Jan 27 14:41:09 crc kubenswrapper[4729]: I0127 14:41:09.035648 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lzl7v"] Jan 27 14:41:09 crc kubenswrapper[4729]: I0127 14:41:09.049962 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4dc2-account-create-update-bqd4c"] Jan 27 14:41:09 crc kubenswrapper[4729]: I0127 14:41:09.061393 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lzl7v"] Jan 27 14:41:09 crc kubenswrapper[4729]: I0127 14:41:09.072122 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4dc2-account-create-update-bqd4c"] Jan 27 14:41:10 crc kubenswrapper[4729]: I0127 14:41:10.034069 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-d33f-account-create-update-jrz6t"] Jan 27 14:41:10 crc kubenswrapper[4729]: I0127 14:41:10.046507 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-gt6kg"] Jan 27 14:41:10 crc kubenswrapper[4729]: I0127 14:41:10.073509 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45aa9c0b-9db2-415e-9cc9-f552f9127f34" path="/var/lib/kubelet/pods/45aa9c0b-9db2-415e-9cc9-f552f9127f34/volumes" Jan 27 14:41:10 crc kubenswrapper[4729]: I0127 14:41:10.074467 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6276002-6f52-4082-b413-767e4b80717a" path="/var/lib/kubelet/pods/b6276002-6f52-4082-b413-767e4b80717a/volumes" Jan 27 14:41:10 crc kubenswrapper[4729]: I0127 14:41:10.075198 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-d33f-account-create-update-jrz6t"] Jan 27 14:41:10 crc kubenswrapper[4729]: I0127 14:41:10.075303 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-gt6kg"] Jan 27 14:41:12 crc 
kubenswrapper[4729]: I0127 14:41:12.067597 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4" path="/var/lib/kubelet/pods/3b7c01e0-0dfb-41c8-89f7-0eb5c38eeda4/volumes" Jan 27 14:41:12 crc kubenswrapper[4729]: I0127 14:41:12.069082 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc" path="/var/lib/kubelet/pods/3d8150fa-bcbf-4ea7-9ff1-56ab2cbfa5bc/volumes" Jan 27 14:41:15 crc kubenswrapper[4729]: I0127 14:41:15.031908 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-74m95"] Jan 27 14:41:15 crc kubenswrapper[4729]: I0127 14:41:15.044841 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-74m95"] Jan 27 14:41:16 crc kubenswrapper[4729]: I0127 14:41:16.064792 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caaaad51-76f2-49f0-8dc4-6ad513a50327" path="/var/lib/kubelet/pods/caaaad51-76f2-49f0-8dc4-6ad513a50327/volumes" Jan 27 14:41:16 crc kubenswrapper[4729]: I0127 14:41:16.104202 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 27 14:41:16 crc kubenswrapper[4729]: I0127 14:41:16.158079 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:41:20 crc kubenswrapper[4729]: I0127 14:41:20.993279 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="rabbitmq" containerID="cri-o://2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f" gracePeriod=604796 Jan 27 14:41:22 crc kubenswrapper[4729]: I0127 14:41:22.655433 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:41:22 crc kubenswrapper[4729]: I0127 14:41:22.655792 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:41:25 crc kubenswrapper[4729]: I0127 14:41:25.036276 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pg87j"] Jan 27 14:41:25 crc kubenswrapper[4729]: I0127 14:41:25.049495 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-637a-account-create-update-hxzgl"] Jan 27 14:41:25 crc kubenswrapper[4729]: I0127 14:41:25.062840 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pg87j"] Jan 27 14:41:25 crc kubenswrapper[4729]: I0127 14:41:25.074476 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-637a-account-create-update-hxzgl"] Jan 27 14:41:26 crc kubenswrapper[4729]: I0127 14:41:26.064643 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c484e844-15ff-4e38-8b3d-95cdbbc29fdf" path="/var/lib/kubelet/pods/c484e844-15ff-4e38-8b3d-95cdbbc29fdf/volumes" Jan 27 14:41:26 crc kubenswrapper[4729]: I0127 14:41:26.065427 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6072b2-cd99-4198-ac75-97a21048aaa9" path="/var/lib/kubelet/pods/eb6072b2-cd99-4198-ac75-97a21048aaa9/volumes" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.773743 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.794697 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9f7w\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-kube-api-access-m9f7w\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.794766 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-erlang-cookie\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.794825 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-tls\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.795006 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b490b2c5-e772-48d2-a2cc-582bda8b019e-pod-info\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.795041 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-server-conf\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.795086 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-plugins-conf\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.795162 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b490b2c5-e772-48d2-a2cc-582bda8b019e-erlang-cookie-secret\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.796344 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.796461 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.800107 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.800201 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-plugins\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.800293 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-config-data\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.800321 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-confd\") pod \"b490b2c5-e772-48d2-a2cc-582bda8b019e\" (UID: \"b490b2c5-e772-48d2-a2cc-582bda8b019e\") " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.801402 4729 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.801425 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-erlang-cookie\") 
on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.804842 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.805934 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.810450 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b490b2c5-e772-48d2-a2cc-582bda8b019e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.824006 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b490b2c5-e772-48d2-a2cc-582bda8b019e-pod-info" (OuterVolumeSpecName: "pod-info") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.834463 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-kube-api-access-m9f7w" (OuterVolumeSpecName: "kube-api-access-m9f7w") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "kube-api-access-m9f7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.890518 4729 generic.go:334] "Generic (PLEG): container finished" podID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerID="2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f" exitCode=0 Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.890665 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.890686 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b490b2c5-e772-48d2-a2cc-582bda8b019e","Type":"ContainerDied","Data":"2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f"} Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.891083 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b490b2c5-e772-48d2-a2cc-582bda8b019e","Type":"ContainerDied","Data":"5d25bc2430dee39a4493b24d644094ccf4647d046320b160a8841638589820ea"} Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.891120 4729 scope.go:117] "RemoveContainer" containerID="2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.894033 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5" (OuterVolumeSpecName: 
"persistence") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.906618 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") on node \"crc\" " Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.906685 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.906699 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9f7w\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-kube-api-access-m9f7w\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.906713 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.906725 4729 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b490b2c5-e772-48d2-a2cc-582bda8b019e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.906736 4729 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b490b2c5-e772-48d2-a2cc-582bda8b019e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.953670 4729 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-config-data" (OuterVolumeSpecName: "config-data") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.967465 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-server-conf" (OuterVolumeSpecName: "server-conf") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.997746 4729 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 27 14:41:27 crc kubenswrapper[4729]: I0127 14:41:27.998008 4729 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5") on node "crc" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.019078 4729 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.019136 4729 reconciler_common.go:293] "Volume detached for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.019151 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b490b2c5-e772-48d2-a2cc-582bda8b019e-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.026653 4729 scope.go:117] "RemoveContainer" containerID="d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.072961 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b490b2c5-e772-48d2-a2cc-582bda8b019e" (UID: "b490b2c5-e772-48d2-a2cc-582bda8b019e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.091181 4729 scope.go:117] "RemoveContainer" containerID="2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f" Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 14:41:28.091942 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f\": container with ID starting with 2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f not found: ID does not exist" containerID="2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.092056 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f"} err="failed to get container status \"2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f\": rpc error: code = NotFound desc = could not find container \"2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f\": container with ID starting with 2700bbee03b30cdb84a4a65e5088793989d04cfc7b0b53e149676bc3407f435f not found: ID does not exist" Jan 27 14:41:28 crc 
kubenswrapper[4729]: I0127 14:41:28.092105 4729 scope.go:117] "RemoveContainer" containerID="d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e" Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 14:41:28.092633 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e\": container with ID starting with d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e not found: ID does not exist" containerID="d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.092691 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e"} err="failed to get container status \"d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e\": rpc error: code = NotFound desc = could not find container \"d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e\": container with ID starting with d4fbee281bb5623a5ac86e1e89f9c658ae229951aeb7639810fabaf0b1105d9e not found: ID does not exist" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.126859 4729 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b490b2c5-e772-48d2-a2cc-582bda8b019e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.239979 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.257238 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.290350 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 
14:41:28.290959 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="rabbitmq" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.290982 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="rabbitmq" Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 14:41:28.291048 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="setup-container" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.291059 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="setup-container" Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 14:41:28.291081 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="extract-content" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.291089 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="extract-content" Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 14:41:28.291098 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="registry-server" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.291108 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="registry-server" Jan 27 14:41:28 crc kubenswrapper[4729]: E0127 14:41:28.291130 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="extract-utilities" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.291139 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="extract-utilities" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.291383 4729 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ceaa91-d5cc-4ac9-8351-ad4ef924678e" containerName="registry-server" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.291426 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" containerName="rabbitmq" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.292969 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.317479 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438074 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438184 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7g7n\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-kube-api-access-n7g7n\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438285 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438368 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438458 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-config-data\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438500 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438651 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a360fb27-d7b1-4d42-9889-f47c87012e2e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438691 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.438714 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.439038 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.439129 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a360fb27-d7b1-4d42-9889-f47c87012e2e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.541695 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a360fb27-d7b1-4d42-9889-f47c87012e2e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542053 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542086 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542134 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542163 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a360fb27-d7b1-4d42-9889-f47c87012e2e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542214 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542293 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7g7n\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-kube-api-access-n7g7n\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542380 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " 
pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542462 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542549 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-config-data\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.542587 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.544017 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.544760 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.547480 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.548404 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.549101 4729 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.549137 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/49b1a718696a83970344de30fbccc78f0978419d3cdeeb51ab24f883887fa19b/globalmount\"" pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.550493 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a360fb27-d7b1-4d42-9889-f47c87012e2e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.551014 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.551840 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a360fb27-d7b1-4d42-9889-f47c87012e2e-config-data\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.553438 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a360fb27-d7b1-4d42-9889-f47c87012e2e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.556366 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a360fb27-d7b1-4d42-9889-f47c87012e2e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.568023 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7g7n\" (UniqueName: \"kubernetes.io/projected/a360fb27-d7b1-4d42-9889-f47c87012e2e-kube-api-access-n7g7n\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.632122 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6a9f48dc-31ca-4501-bbd5-f1b744f37ab5\") pod \"rabbitmq-server-0\" (UID: \"a360fb27-d7b1-4d42-9889-f47c87012e2e\") " pod="openstack/rabbitmq-server-0" Jan 27 14:41:28 crc kubenswrapper[4729]: I0127 14:41:28.640495 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 27 14:41:29 crc kubenswrapper[4729]: I0127 14:41:29.183817 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 27 14:41:29 crc kubenswrapper[4729]: I0127 14:41:29.924724 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a360fb27-d7b1-4d42-9889-f47c87012e2e","Type":"ContainerStarted","Data":"5a17a208d63911b42144c4d5580f5ae5f552d25e1d2eee2e338e5ed2bb90db69"} Jan 27 14:41:30 crc kubenswrapper[4729]: I0127 14:41:30.066141 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b490b2c5-e772-48d2-a2cc-582bda8b019e" path="/var/lib/kubelet/pods/b490b2c5-e772-48d2-a2cc-582bda8b019e/volumes" Jan 27 14:41:31 crc kubenswrapper[4729]: I0127 14:41:31.950749 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a360fb27-d7b1-4d42-9889-f47c87012e2e","Type":"ContainerStarted","Data":"88550a120bc397b5b03a120db4b3e1fb342268f5debc880100b51389810dbcfc"} Jan 27 14:41:52 crc kubenswrapper[4729]: I0127 14:41:52.655581 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:41:52 crc kubenswrapper[4729]: I0127 14:41:52.656230 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:42:04 crc kubenswrapper[4729]: I0127 14:42:04.302982 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="a360fb27-d7b1-4d42-9889-f47c87012e2e" containerID="88550a120bc397b5b03a120db4b3e1fb342268f5debc880100b51389810dbcfc" exitCode=0 Jan 27 14:42:04 crc kubenswrapper[4729]: I0127 14:42:04.303101 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a360fb27-d7b1-4d42-9889-f47c87012e2e","Type":"ContainerDied","Data":"88550a120bc397b5b03a120db4b3e1fb342268f5debc880100b51389810dbcfc"} Jan 27 14:42:05 crc kubenswrapper[4729]: I0127 14:42:05.316779 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a360fb27-d7b1-4d42-9889-f47c87012e2e","Type":"ContainerStarted","Data":"092eecebb3a2f2fe8f94c5c82665164f99eff3f99cbc3f8560e2a3ba3ebb61cf"} Jan 27 14:42:05 crc kubenswrapper[4729]: I0127 14:42:05.317245 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 27 14:42:05 crc kubenswrapper[4729]: I0127 14:42:05.347844 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.347827508 podStartE2EDuration="37.347827508s" podCreationTimestamp="2026-01-27 14:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:42:05.338727354 +0000 UTC m=+2211.922918378" watchObservedRunningTime="2026-01-27 14:42:05.347827508 +0000 UTC m=+2211.932018512" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.467981 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2ljc6"] Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.470677 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.489820 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ljc6"] Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.511335 4729 scope.go:117] "RemoveContainer" containerID="aa65169390446ab20256a54d4a230ed0d35b4a5631f18985f4de82504c0a4b1e" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.547131 4729 scope.go:117] "RemoveContainer" containerID="1fdde86359780afeb3f8c99fa5cec1d954117e593f40316d1ba30d9ce6e8ae32" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.610192 4729 scope.go:117] "RemoveContainer" containerID="eb472908af5e413325381e0b0e7e45313ff7873ed1bc536bcbecde504b6f4850" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.622047 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szt5m\" (UniqueName: \"kubernetes.io/projected/bd806e7f-6ef0-410d-910d-69af93fe53e2-kube-api-access-szt5m\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.622289 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-utilities\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.622356 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-catalog-content\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " 
pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.671157 4729 scope.go:117] "RemoveContainer" containerID="c4e5b0a48fe01056a742d1756873e73978d307cff5ae9a0fa40fed45e085a408" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.724765 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-catalog-content\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.724842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szt5m\" (UniqueName: \"kubernetes.io/projected/bd806e7f-6ef0-410d-910d-69af93fe53e2-kube-api-access-szt5m\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.725108 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-utilities\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.725690 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-utilities\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.726021 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-catalog-content\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.729331 4729 scope.go:117] "RemoveContainer" containerID="ee1db53742fe975639492ff9008734974b4a4e60fecbd41d9c24c9a268e5e02c" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.744711 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szt5m\" (UniqueName: \"kubernetes.io/projected/bd806e7f-6ef0-410d-910d-69af93fe53e2-kube-api-access-szt5m\") pod \"redhat-operators-2ljc6\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.800284 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.848616 4729 scope.go:117] "RemoveContainer" containerID="52be79555f5a759f3b188732504cdd53df1eff0aa0dd772102fb8d92534cef46" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.872652 4729 scope.go:117] "RemoveContainer" containerID="477a244569fc9505ad6844c4dfd812bdb6c5eb8d3a7ff9e9cfe829e1a48cef68" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.910288 4729 scope.go:117] "RemoveContainer" containerID="6ff7730e5c08394546af83857c301aadb27cd591cc604c90f8e9a8577b25a5ba" Jan 27 14:42:07 crc kubenswrapper[4729]: I0127 14:42:07.946025 4729 scope.go:117] "RemoveContainer" containerID="db76f0122c0e55869d6636f8930ab13d354c173c565d70f3e568b967a12729e2" Jan 27 14:42:08 crc kubenswrapper[4729]: W0127 14:42:08.322751 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd806e7f_6ef0_410d_910d_69af93fe53e2.slice/crio-0c4f9ca9cc7a50106d6af87e9ee610f3b53eb19820e6d02c8b2e2945bba3deb4 WatchSource:0}: Error finding container 0c4f9ca9cc7a50106d6af87e9ee610f3b53eb19820e6d02c8b2e2945bba3deb4: Status 404 returned error can't find the container with id 0c4f9ca9cc7a50106d6af87e9ee610f3b53eb19820e6d02c8b2e2945bba3deb4 Jan 27 14:42:08 crc kubenswrapper[4729]: I0127 14:42:08.323296 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2ljc6"] Jan 27 14:42:08 crc kubenswrapper[4729]: I0127 14:42:08.356199 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerStarted","Data":"0c4f9ca9cc7a50106d6af87e9ee610f3b53eb19820e6d02c8b2e2945bba3deb4"} Jan 27 14:42:09 crc kubenswrapper[4729]: I0127 14:42:09.369343 4729 generic.go:334] "Generic (PLEG): container finished" podID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerID="97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38" exitCode=0 Jan 27 14:42:09 crc kubenswrapper[4729]: I0127 14:42:09.369460 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerDied","Data":"97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38"} Jan 27 14:42:11 crc kubenswrapper[4729]: I0127 14:42:11.403657 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerStarted","Data":"3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880"} Jan 27 14:42:18 crc kubenswrapper[4729]: I0127 14:42:18.644457 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 27 14:42:19 crc 
kubenswrapper[4729]: I0127 14:42:19.487034 4729 generic.go:334] "Generic (PLEG): container finished" podID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerID="3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880" exitCode=0 Jan 27 14:42:19 crc kubenswrapper[4729]: I0127 14:42:19.487084 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerDied","Data":"3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880"} Jan 27 14:42:20 crc kubenswrapper[4729]: I0127 14:42:20.524295 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerStarted","Data":"11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8"} Jan 27 14:42:20 crc kubenswrapper[4729]: I0127 14:42:20.560839 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2ljc6" podStartSLOduration=3.008877014 podStartE2EDuration="13.560818118s" podCreationTimestamp="2026-01-27 14:42:07 +0000 UTC" firstStartedPulling="2026-01-27 14:42:09.371802811 +0000 UTC m=+2215.955993815" lastFinishedPulling="2026-01-27 14:42:19.923743915 +0000 UTC m=+2226.507934919" observedRunningTime="2026-01-27 14:42:20.547318346 +0000 UTC m=+2227.131509370" watchObservedRunningTime="2026-01-27 14:42:20.560818118 +0000 UTC m=+2227.145009142" Jan 27 14:42:22 crc kubenswrapper[4729]: I0127 14:42:22.655363 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:42:22 crc kubenswrapper[4729]: I0127 14:42:22.655771 4729 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:42:22 crc kubenswrapper[4729]: I0127 14:42:22.655829 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:42:22 crc kubenswrapper[4729]: I0127 14:42:22.656934 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:42:22 crc kubenswrapper[4729]: I0127 14:42:22.657015 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" gracePeriod=600 Jan 27 14:42:22 crc kubenswrapper[4729]: E0127 14:42:22.791471 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:42:23 crc kubenswrapper[4729]: I0127 14:42:23.559709 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" exitCode=0 Jan 27 14:42:23 crc kubenswrapper[4729]: I0127 14:42:23.559809 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42"} Jan 27 14:42:23 crc kubenswrapper[4729]: I0127 14:42:23.560111 4729 scope.go:117] "RemoveContainer" containerID="2b06dc6218d9c2c78f700138490d117911b33585502602acff7be6e8841ff698" Jan 27 14:42:23 crc kubenswrapper[4729]: I0127 14:42:23.560971 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:42:23 crc kubenswrapper[4729]: E0127 14:42:23.561433 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:42:26 crc kubenswrapper[4729]: I0127 14:42:26.050968 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-42bph"] Jan 27 14:42:26 crc kubenswrapper[4729]: I0127 14:42:26.065449 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-42bph"] Jan 27 14:42:27 crc kubenswrapper[4729]: I0127 14:42:27.801556 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:27 crc kubenswrapper[4729]: I0127 14:42:27.802162 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:27 crc 
kubenswrapper[4729]: I0127 14:42:27.863323 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:28 crc kubenswrapper[4729]: I0127 14:42:28.065583 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a98b729-f749-4536-a5eb-671c63734bcd" path="/var/lib/kubelet/pods/1a98b729-f749-4536-a5eb-671c63734bcd/volumes" Jan 27 14:42:28 crc kubenswrapper[4729]: I0127 14:42:28.666853 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:28 crc kubenswrapper[4729]: I0127 14:42:28.735078 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ljc6"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.044962 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lrspj"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.070705 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lqmdc"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.070765 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1cc2-account-create-update-7k7vv"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.081993 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lqmdc"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.091670 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lrspj"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.103805 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1cc2-account-create-update-7k7vv"] Jan 27 14:42:30 crc kubenswrapper[4729]: I0127 14:42:30.636672 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2ljc6" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" 
containerName="registry-server" containerID="cri-o://11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8" gracePeriod=2 Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.051525 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-tsf6d"] Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.065515 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8a3f-account-create-update-dqmrt"] Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.078335 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-tsf6d"] Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.089640 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8a3f-account-create-update-dqmrt"] Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.261444 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.340039 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-utilities\") pod \"bd806e7f-6ef0-410d-910d-69af93fe53e2\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.340289 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szt5m\" (UniqueName: \"kubernetes.io/projected/bd806e7f-6ef0-410d-910d-69af93fe53e2-kube-api-access-szt5m\") pod \"bd806e7f-6ef0-410d-910d-69af93fe53e2\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.340498 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-catalog-content\") pod 
\"bd806e7f-6ef0-410d-910d-69af93fe53e2\" (UID: \"bd806e7f-6ef0-410d-910d-69af93fe53e2\") " Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.341025 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-utilities" (OuterVolumeSpecName: "utilities") pod "bd806e7f-6ef0-410d-910d-69af93fe53e2" (UID: "bd806e7f-6ef0-410d-910d-69af93fe53e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.341238 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.346378 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd806e7f-6ef0-410d-910d-69af93fe53e2-kube-api-access-szt5m" (OuterVolumeSpecName: "kube-api-access-szt5m") pod "bd806e7f-6ef0-410d-910d-69af93fe53e2" (UID: "bd806e7f-6ef0-410d-910d-69af93fe53e2"). InnerVolumeSpecName "kube-api-access-szt5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.444125 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szt5m\" (UniqueName: \"kubernetes.io/projected/bd806e7f-6ef0-410d-910d-69af93fe53e2-kube-api-access-szt5m\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.481368 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd806e7f-6ef0-410d-910d-69af93fe53e2" (UID: "bd806e7f-6ef0-410d-910d-69af93fe53e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.547605 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd806e7f-6ef0-410d-910d-69af93fe53e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.650071 4729 generic.go:334] "Generic (PLEG): container finished" podID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerID="11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8" exitCode=0 Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.650113 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerDied","Data":"11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8"} Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.650140 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2ljc6" event={"ID":"bd806e7f-6ef0-410d-910d-69af93fe53e2","Type":"ContainerDied","Data":"0c4f9ca9cc7a50106d6af87e9ee610f3b53eb19820e6d02c8b2e2945bba3deb4"} Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.650157 4729 scope.go:117] "RemoveContainer" containerID="11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.650275 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2ljc6" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.687804 4729 scope.go:117] "RemoveContainer" containerID="3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.705456 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2ljc6"] Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.718535 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2ljc6"] Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.724718 4729 scope.go:117] "RemoveContainer" containerID="97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.773300 4729 scope.go:117] "RemoveContainer" containerID="11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8" Jan 27 14:42:31 crc kubenswrapper[4729]: E0127 14:42:31.773669 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8\": container with ID starting with 11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8 not found: ID does not exist" containerID="11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.773699 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8"} err="failed to get container status \"11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8\": rpc error: code = NotFound desc = could not find container \"11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8\": container with ID starting with 11dfe4d6a915d3c7e0e752e9209b3d74f61e06e2232f291444b3ba07f5a11ef8 not found: ID does 
not exist" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.773720 4729 scope.go:117] "RemoveContainer" containerID="3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880" Jan 27 14:42:31 crc kubenswrapper[4729]: E0127 14:42:31.774779 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880\": container with ID starting with 3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880 not found: ID does not exist" containerID="3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.774938 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880"} err="failed to get container status \"3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880\": rpc error: code = NotFound desc = could not find container \"3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880\": container with ID starting with 3b7c7b05fb0cda0bbba04562b0e43bba571216bbf371a4f1c72f35725be24880 not found: ID does not exist" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.775059 4729 scope.go:117] "RemoveContainer" containerID="97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38" Jan 27 14:42:31 crc kubenswrapper[4729]: E0127 14:42:31.775542 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38\": container with ID starting with 97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38 not found: ID does not exist" containerID="97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38" Jan 27 14:42:31 crc kubenswrapper[4729]: I0127 14:42:31.775568 4729 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38"} err="failed to get container status \"97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38\": rpc error: code = NotFound desc = could not find container \"97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38\": container with ID starting with 97b26f50a020603a8830274ddb5d38b0da1f8b439ede532f8af76e3cf2311e38 not found: ID does not exist" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.036341 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-7565-account-create-update-n45zb"] Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.048887 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-55c3-account-create-update-b2gdg"] Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.064138 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257aa9e6-20b3-41bf-b4e7-d2f60bd884a7" path="/var/lib/kubelet/pods/257aa9e6-20b3-41bf-b4e7-d2f60bd884a7/volumes" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.066235 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9cb86fa-4640-4a25-a97e-0212029e2d54" path="/var/lib/kubelet/pods/a9cb86fa-4640-4a25-a97e-0212029e2d54/volumes" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.067462 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" path="/var/lib/kubelet/pods/bd806e7f-6ef0-410d-910d-69af93fe53e2/volumes" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.068689 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db170cee-4747-486f-9e1c-ed91b358127c" path="/var/lib/kubelet/pods/db170cee-4747-486f-9e1c-ed91b358127c/volumes" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.069652 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ec91797f-3812-4509-b9c1-dbc72bb4576c" path="/var/lib/kubelet/pods/ec91797f-3812-4509-b9c1-dbc72bb4576c/volumes" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.070352 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f824a03c-8320-4c09-83ab-7bf997460ad5" path="/var/lib/kubelet/pods/f824a03c-8320-4c09-83ab-7bf997460ad5/volumes" Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.071071 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-7565-account-create-update-n45zb"] Jan 27 14:42:32 crc kubenswrapper[4729]: I0127 14:42:32.072092 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-55c3-account-create-update-b2gdg"] Jan 27 14:42:34 crc kubenswrapper[4729]: I0127 14:42:34.064753 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59aa2e6-ef2f-459c-9db7-0765405fe2e7" path="/var/lib/kubelet/pods/c59aa2e6-ef2f-459c-9db7-0765405fe2e7/volumes" Jan 27 14:42:34 crc kubenswrapper[4729]: I0127 14:42:34.066750 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44c6dc1-a783-4680-aa97-c68f4c2a435e" path="/var/lib/kubelet/pods/d44c6dc1-a783-4680-aa97-c68f4c2a435e/volumes" Jan 27 14:42:35 crc kubenswrapper[4729]: I0127 14:42:35.051489 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:42:35 crc kubenswrapper[4729]: E0127 14:42:35.052073 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:42:48 crc kubenswrapper[4729]: I0127 14:42:48.052099 4729 scope.go:117] "RemoveContainer" 
containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:42:48 crc kubenswrapper[4729]: E0127 14:42:48.052900 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:42:59 crc kubenswrapper[4729]: I0127 14:42:59.051548 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:42:59 crc kubenswrapper[4729]: E0127 14:42:59.052418 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.233933 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-86mvc"] Jan 27 14:43:03 crc kubenswrapper[4729]: E0127 14:43:03.235184 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerName="extract-utilities" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.235202 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerName="extract-utilities" Jan 27 14:43:03 crc kubenswrapper[4729]: E0127 14:43:03.235225 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" 
containerName="extract-content" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.235232 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerName="extract-content" Jan 27 14:43:03 crc kubenswrapper[4729]: E0127 14:43:03.235254 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerName="registry-server" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.235261 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerName="registry-server" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.235502 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd806e7f-6ef0-410d-910d-69af93fe53e2" containerName="registry-server" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.238060 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.257401 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86mvc"] Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.343669 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-catalog-content\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.343811 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9ckf\" (UniqueName: \"kubernetes.io/projected/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-kube-api-access-q9ckf\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " 
pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.343980 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-utilities\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.446887 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-catalog-content\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.446995 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9ckf\" (UniqueName: \"kubernetes.io/projected/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-kube-api-access-q9ckf\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.447112 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-utilities\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.447625 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-catalog-content\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " 
pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.447657 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-utilities\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.468055 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9ckf\" (UniqueName: \"kubernetes.io/projected/3934c5e8-bbe9-4ce9-84da-61ee1f3e968a-kube-api-access-q9ckf\") pod \"community-operators-86mvc\" (UID: \"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a\") " pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:03 crc kubenswrapper[4729]: I0127 14:43:03.577111 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:04 crc kubenswrapper[4729]: I0127 14:43:04.070065 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86mvc"] Jan 27 14:43:04 crc kubenswrapper[4729]: I0127 14:43:04.123226 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86mvc" event={"ID":"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a","Type":"ContainerStarted","Data":"30d3529fe378fa2d3188438dadae0a2df5d2fa728407a3dd9c2ba4a20074afbd"} Jan 27 14:43:05 crc kubenswrapper[4729]: I0127 14:43:05.081597 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kgdx4"] Jan 27 14:43:05 crc kubenswrapper[4729]: I0127 14:43:05.096068 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kgdx4"] Jan 27 14:43:05 crc kubenswrapper[4729]: I0127 14:43:05.144611 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="3934c5e8-bbe9-4ce9-84da-61ee1f3e968a" containerID="c14c18417d08082d0bca127ff5000bb06777f7d6a20e3ad6727392c4059a90b5" exitCode=0 Jan 27 14:43:05 crc kubenswrapper[4729]: I0127 14:43:05.144657 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86mvc" event={"ID":"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a","Type":"ContainerDied","Data":"c14c18417d08082d0bca127ff5000bb06777f7d6a20e3ad6727392c4059a90b5"} Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.068370 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35b9d6e-1e57-4e7d-812c-8e662e652759" path="/var/lib/kubelet/pods/f35b9d6e-1e57-4e7d-812c-8e662e652759/volumes" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.435570 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5q2xt"] Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.438163 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.451736 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q2xt"] Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.623672 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-catalog-content\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.624278 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86br7\" (UniqueName: \"kubernetes.io/projected/3051994b-1642-4725-9936-14f141ea6ed9-kube-api-access-86br7\") pod \"redhat-marketplace-5q2xt\" (UID: 
\"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.624486 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-utilities\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.728302 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86br7\" (UniqueName: \"kubernetes.io/projected/3051994b-1642-4725-9936-14f141ea6ed9-kube-api-access-86br7\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.728521 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-utilities\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.729127 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-utilities\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.729379 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-catalog-content\") pod \"redhat-marketplace-5q2xt\" (UID: 
\"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.729666 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-catalog-content\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.759448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86br7\" (UniqueName: \"kubernetes.io/projected/3051994b-1642-4725-9936-14f141ea6ed9-kube-api-access-86br7\") pod \"redhat-marketplace-5q2xt\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:06 crc kubenswrapper[4729]: I0127 14:43:06.774145 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:07 crc kubenswrapper[4729]: I0127 14:43:07.310126 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q2xt"] Jan 27 14:43:08 crc kubenswrapper[4729]: I0127 14:43:08.195335 4729 generic.go:334] "Generic (PLEG): container finished" podID="3051994b-1642-4725-9936-14f141ea6ed9" containerID="e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22" exitCode=0 Jan 27 14:43:08 crc kubenswrapper[4729]: I0127 14:43:08.195737 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerDied","Data":"e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22"} Jan 27 14:43:08 crc kubenswrapper[4729]: I0127 14:43:08.195770 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" 
event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerStarted","Data":"418d0a21d1a147824e85ddf3e0fb351ba00d7fc42e055229bf4104315fe75e8c"} Jan 27 14:43:08 crc kubenswrapper[4729]: I0127 14:43:08.241459 4729 scope.go:117] "RemoveContainer" containerID="b5f26b984487f9d2eb089ff7da8d8e7f0ef34ac85b2c821249ae26b6279e89f3" Jan 27 14:43:09 crc kubenswrapper[4729]: I0127 14:43:09.973141 4729 scope.go:117] "RemoveContainer" containerID="11cbfeba36e2b3ae1ac65c78e0c35c7f7f1b447e37625e8d2a7d7e6665775f07" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.038058 4729 scope.go:117] "RemoveContainer" containerID="8f0c0bbf4e9b73ceb56c61f98c497ddd247df45472d5740c9f12e1a1843c916b" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.093003 4729 scope.go:117] "RemoveContainer" containerID="0f3cbb428e3b53f2e99adaf0648d89ac6ae573b0b428ce0c66abf241ff0c574b" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.141201 4729 scope.go:117] "RemoveContainer" containerID="a9dd6c52c2f95e7befbd51526aac2fd038f939db117258dbeecde1c3ab9990d2" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.307741 4729 scope.go:117] "RemoveContainer" containerID="d08b1c16d070f6241a6ef717ac54d556587ae73ad61a485bc42e64857531401b" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.382408 4729 scope.go:117] "RemoveContainer" containerID="4784a7689432186af9cd924a663aa730686434b7f0abd5d7f2d1fdb0124aebcb" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.410099 4729 scope.go:117] "RemoveContainer" containerID="c67e543d704ed34a37a0c05b3dc7747a191cb4be108360a325ce61feaac4c636" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.441303 4729 scope.go:117] "RemoveContainer" containerID="73abc768f98887d6be8f8b3063413ecd1af962851f5d41e6c2b2d92e0e01dcf2" Jan 27 14:43:10 crc kubenswrapper[4729]: I0127 14:43:10.465204 4729 scope.go:117] "RemoveContainer" containerID="113c99986a264b5e8c71e34ddc0c4ca8f433e3e6850fb3288188b1faadc95ad6" Jan 27 14:43:11 crc kubenswrapper[4729]: I0127 
14:43:11.251681 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerStarted","Data":"5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6"} Jan 27 14:43:11 crc kubenswrapper[4729]: I0127 14:43:11.253913 4729 generic.go:334] "Generic (PLEG): container finished" podID="3934c5e8-bbe9-4ce9-84da-61ee1f3e968a" containerID="ac8c4b6514e71c256b77ba29b52aa6d17f1f49410c4baa5a0c42cdd6bf92ff78" exitCode=0 Jan 27 14:43:11 crc kubenswrapper[4729]: I0127 14:43:11.253964 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86mvc" event={"ID":"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a","Type":"ContainerDied","Data":"ac8c4b6514e71c256b77ba29b52aa6d17f1f49410c4baa5a0c42cdd6bf92ff78"} Jan 27 14:43:12 crc kubenswrapper[4729]: I0127 14:43:12.051045 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:43:12 crc kubenswrapper[4729]: E0127 14:43:12.051636 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:43:16 crc kubenswrapper[4729]: I0127 14:43:16.333103 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86mvc" event={"ID":"3934c5e8-bbe9-4ce9-84da-61ee1f3e968a","Type":"ContainerStarted","Data":"564ef6e6f68066184bd6fbf1ec0b2d3ddaed5262ba6f073bfb635e12a86570a1"} Jan 27 14:43:16 crc kubenswrapper[4729]: I0127 14:43:16.357791 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-86mvc" podStartSLOduration=3.258847669 podStartE2EDuration="13.357769521s" podCreationTimestamp="2026-01-27 14:43:03 +0000 UTC" firstStartedPulling="2026-01-27 14:43:05.148367287 +0000 UTC m=+2271.732558281" lastFinishedPulling="2026-01-27 14:43:15.247289119 +0000 UTC m=+2281.831480133" observedRunningTime="2026-01-27 14:43:16.350623388 +0000 UTC m=+2282.934814402" watchObservedRunningTime="2026-01-27 14:43:16.357769521 +0000 UTC m=+2282.941960525" Jan 27 14:43:18 crc kubenswrapper[4729]: I0127 14:43:18.356686 4729 generic.go:334] "Generic (PLEG): container finished" podID="3051994b-1642-4725-9936-14f141ea6ed9" containerID="5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6" exitCode=0 Jan 27 14:43:18 crc kubenswrapper[4729]: I0127 14:43:18.356742 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerDied","Data":"5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6"} Jan 27 14:43:19 crc kubenswrapper[4729]: I0127 14:43:19.389172 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerStarted","Data":"7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39"} Jan 27 14:43:19 crc kubenswrapper[4729]: I0127 14:43:19.417228 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5q2xt" podStartSLOduration=4.502094617 podStartE2EDuration="13.417205914s" podCreationTimestamp="2026-01-27 14:43:06 +0000 UTC" firstStartedPulling="2026-01-27 14:43:09.916834302 +0000 UTC m=+2276.501025306" lastFinishedPulling="2026-01-27 14:43:18.831945599 +0000 UTC m=+2285.416136603" observedRunningTime="2026-01-27 14:43:19.406773674 +0000 UTC m=+2285.990964678" watchObservedRunningTime="2026-01-27 
14:43:19.417205914 +0000 UTC m=+2286.001396928" Jan 27 14:43:23 crc kubenswrapper[4729]: I0127 14:43:23.578139 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:23 crc kubenswrapper[4729]: I0127 14:43:23.578608 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:23 crc kubenswrapper[4729]: I0127 14:43:23.630973 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:24 crc kubenswrapper[4729]: I0127 14:43:24.489821 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-86mvc" Jan 27 14:43:24 crc kubenswrapper[4729]: I0127 14:43:24.560893 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86mvc"] Jan 27 14:43:24 crc kubenswrapper[4729]: I0127 14:43:24.607685 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqcfg"] Jan 27 14:43:24 crc kubenswrapper[4729]: I0127 14:43:24.607932 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bqcfg" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="registry-server" containerID="cri-o://55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da" gracePeriod=2 Jan 27 14:43:25 crc kubenswrapper[4729]: E0127 14:43:25.337248 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62a0134_758f_4404_b73c_77d7070bd4dd.slice/crio-55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd62a0134_758f_4404_b73c_77d7070bd4dd.slice/crio-conmon-55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da.scope\": RecentStats: unable to find data in memory cache]" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.461722 4729 generic.go:334] "Generic (PLEG): container finished" podID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerID="55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da" exitCode=0 Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.463399 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcfg" event={"ID":"d62a0134-758f-4404-b73c-77d7070bd4dd","Type":"ContainerDied","Data":"55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da"} Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.765776 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.861800 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fktw\" (UniqueName: \"kubernetes.io/projected/d62a0134-758f-4404-b73c-77d7070bd4dd-kube-api-access-4fktw\") pod \"d62a0134-758f-4404-b73c-77d7070bd4dd\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.861863 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-catalog-content\") pod \"d62a0134-758f-4404-b73c-77d7070bd4dd\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.862103 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-utilities\") pod 
\"d62a0134-758f-4404-b73c-77d7070bd4dd\" (UID: \"d62a0134-758f-4404-b73c-77d7070bd4dd\") " Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.867351 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-utilities" (OuterVolumeSpecName: "utilities") pod "d62a0134-758f-4404-b73c-77d7070bd4dd" (UID: "d62a0134-758f-4404-b73c-77d7070bd4dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.886082 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d62a0134-758f-4404-b73c-77d7070bd4dd-kube-api-access-4fktw" (OuterVolumeSpecName: "kube-api-access-4fktw") pod "d62a0134-758f-4404-b73c-77d7070bd4dd" (UID: "d62a0134-758f-4404-b73c-77d7070bd4dd"). InnerVolumeSpecName "kube-api-access-4fktw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.927283 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d62a0134-758f-4404-b73c-77d7070bd4dd" (UID: "d62a0134-758f-4404-b73c-77d7070bd4dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.965725 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fktw\" (UniqueName: \"kubernetes.io/projected/d62a0134-758f-4404-b73c-77d7070bd4dd-kube-api-access-4fktw\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.965765 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:25 crc kubenswrapper[4729]: I0127 14:43:25.965777 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d62a0134-758f-4404-b73c-77d7070bd4dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.051673 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:43:26 crc kubenswrapper[4729]: E0127 14:43:26.052268 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.474450 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bqcfg" event={"ID":"d62a0134-758f-4404-b73c-77d7070bd4dd","Type":"ContainerDied","Data":"f4b4c96916289a1689d4344ace461159c3e9f7f54cad780ed4fb78316ded5341"} Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.474487 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bqcfg" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.474522 4729 scope.go:117] "RemoveContainer" containerID="55ec16633aa477690a03d0cdc34d79f9548e2249c11493522c0e287c6788f3da" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.510502 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bqcfg"] Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.511939 4729 scope.go:117] "RemoveContainer" containerID="879b46222d499f149e39baee77a92e254fc562bcfe1092289e66e223462c9627" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.527489 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bqcfg"] Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.552865 4729 scope.go:117] "RemoveContainer" containerID="d3be9105aee6a7412521dceddec1e2f1b8777a45c7a77c913240017f683713dc" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.774794 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.775134 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:26 crc kubenswrapper[4729]: I0127 14:43:26.824991 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:27 crc kubenswrapper[4729]: I0127 14:43:27.579807 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:28 crc kubenswrapper[4729]: I0127 14:43:28.064699 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" path="/var/lib/kubelet/pods/d62a0134-758f-4404-b73c-77d7070bd4dd/volumes" Jan 27 14:43:28 crc 
kubenswrapper[4729]: I0127 14:43:28.872929 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q2xt"] Jan 27 14:43:29 crc kubenswrapper[4729]: I0127 14:43:29.516785 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5q2xt" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="registry-server" containerID="cri-o://7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39" gracePeriod=2 Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.076526 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.171699 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-utilities\") pod \"3051994b-1642-4725-9936-14f141ea6ed9\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.171956 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86br7\" (UniqueName: \"kubernetes.io/projected/3051994b-1642-4725-9936-14f141ea6ed9-kube-api-access-86br7\") pod \"3051994b-1642-4725-9936-14f141ea6ed9\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.172052 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-catalog-content\") pod \"3051994b-1642-4725-9936-14f141ea6ed9\" (UID: \"3051994b-1642-4725-9936-14f141ea6ed9\") " Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.172501 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-utilities" (OuterVolumeSpecName: "utilities") pod "3051994b-1642-4725-9936-14f141ea6ed9" (UID: "3051994b-1642-4725-9936-14f141ea6ed9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.174370 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.180269 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3051994b-1642-4725-9936-14f141ea6ed9-kube-api-access-86br7" (OuterVolumeSpecName: "kube-api-access-86br7") pod "3051994b-1642-4725-9936-14f141ea6ed9" (UID: "3051994b-1642-4725-9936-14f141ea6ed9"). InnerVolumeSpecName "kube-api-access-86br7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.198520 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3051994b-1642-4725-9936-14f141ea6ed9" (UID: "3051994b-1642-4725-9936-14f141ea6ed9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.276985 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86br7\" (UniqueName: \"kubernetes.io/projected/3051994b-1642-4725-9936-14f141ea6ed9-kube-api-access-86br7\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.277019 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3051994b-1642-4725-9936-14f141ea6ed9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.533914 4729 generic.go:334] "Generic (PLEG): container finished" podID="3051994b-1642-4725-9936-14f141ea6ed9" containerID="7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39" exitCode=0 Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.533987 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5q2xt" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.534003 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerDied","Data":"7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39"} Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.534075 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5q2xt" event={"ID":"3051994b-1642-4725-9936-14f141ea6ed9","Type":"ContainerDied","Data":"418d0a21d1a147824e85ddf3e0fb351ba00d7fc42e055229bf4104315fe75e8c"} Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.534106 4729 scope.go:117] "RemoveContainer" containerID="7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.562060 4729 scope.go:117] "RemoveContainer" 
containerID="5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.588915 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q2xt"] Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.597568 4729 scope.go:117] "RemoveContainer" containerID="e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.608194 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5q2xt"] Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.668868 4729 scope.go:117] "RemoveContainer" containerID="7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39" Jan 27 14:43:30 crc kubenswrapper[4729]: E0127 14:43:30.669529 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39\": container with ID starting with 7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39 not found: ID does not exist" containerID="7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.669646 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39"} err="failed to get container status \"7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39\": rpc error: code = NotFound desc = could not find container \"7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39\": container with ID starting with 7fa22af388c3415ea65217d50251f1df3466cdf6d5e17c5bbd426c11b392ec39 not found: ID does not exist" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.669768 4729 scope.go:117] "RemoveContainer" 
containerID="5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6" Jan 27 14:43:30 crc kubenswrapper[4729]: E0127 14:43:30.670446 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6\": container with ID starting with 5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6 not found: ID does not exist" containerID="5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.670494 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6"} err="failed to get container status \"5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6\": rpc error: code = NotFound desc = could not find container \"5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6\": container with ID starting with 5a2b104a2711dc909f1dd5dc4ac2db772f2a9ed471e69a93d8f6b82d5b5eb9d6 not found: ID does not exist" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.670529 4729 scope.go:117] "RemoveContainer" containerID="e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22" Jan 27 14:43:30 crc kubenswrapper[4729]: E0127 14:43:30.670851 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22\": container with ID starting with e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22 not found: ID does not exist" containerID="e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22" Jan 27 14:43:30 crc kubenswrapper[4729]: I0127 14:43:30.670883 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22"} err="failed to get container status \"e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22\": rpc error: code = NotFound desc = could not find container \"e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22\": container with ID starting with e021a9b505ed0b48a620234d3a20c9efcd82f55ae23f15b656a7d81e33230a22 not found: ID does not exist" Jan 27 14:43:32 crc kubenswrapper[4729]: I0127 14:43:32.063171 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3051994b-1642-4725-9936-14f141ea6ed9" path="/var/lib/kubelet/pods/3051994b-1642-4725-9936-14f141ea6ed9/volumes" Jan 27 14:43:37 crc kubenswrapper[4729]: I0127 14:43:37.047019 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mvsqd"] Jan 27 14:43:37 crc kubenswrapper[4729]: I0127 14:43:37.060703 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mvsqd"] Jan 27 14:43:38 crc kubenswrapper[4729]: I0127 14:43:38.063142 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2d8fdf-9710-4e95-a733-8ce7f61951eb" path="/var/lib/kubelet/pods/1f2d8fdf-9710-4e95-a733-8ce7f61951eb/volumes" Jan 27 14:43:39 crc kubenswrapper[4729]: I0127 14:43:39.051096 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:43:39 crc kubenswrapper[4729]: E0127 14:43:39.051866 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:43:50 crc kubenswrapper[4729]: I0127 
14:43:50.046815 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hw6n4"] Jan 27 14:43:50 crc kubenswrapper[4729]: I0127 14:43:50.053549 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:43:50 crc kubenswrapper[4729]: E0127 14:43:50.054200 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:43:50 crc kubenswrapper[4729]: I0127 14:43:50.101437 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hw6n4"] Jan 27 14:43:52 crc kubenswrapper[4729]: I0127 14:43:52.064948 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd20345f-846e-4b32-ae20-dcfd968b207d" path="/var/lib/kubelet/pods/bd20345f-846e-4b32-ae20-dcfd968b207d/volumes" Jan 27 14:43:52 crc kubenswrapper[4729]: I0127 14:43:52.805641 4729 generic.go:334] "Generic (PLEG): container finished" podID="9795f0ec-6b8d-4470-bd63-584192019fcf" containerID="d009b4bac31d3e8d37b2ee2a157d6364e0f6cb2c5059280a3d1617ef4b1b6b78" exitCode=0 Jan 27 14:43:52 crc kubenswrapper[4729]: I0127 14:43:52.805789 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" event={"ID":"9795f0ec-6b8d-4470-bd63-584192019fcf","Type":"ContainerDied","Data":"d009b4bac31d3e8d37b2ee2a157d6364e0f6cb2c5059280a3d1617ef4b1b6b78"} Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.314264 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.407112 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-bootstrap-combined-ca-bundle\") pod \"9795f0ec-6b8d-4470-bd63-584192019fcf\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.407452 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-ssh-key-openstack-edpm-ipam\") pod \"9795f0ec-6b8d-4470-bd63-584192019fcf\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.407581 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-inventory\") pod \"9795f0ec-6b8d-4470-bd63-584192019fcf\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.407674 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvz79\" (UniqueName: \"kubernetes.io/projected/9795f0ec-6b8d-4470-bd63-584192019fcf-kube-api-access-dvz79\") pod \"9795f0ec-6b8d-4470-bd63-584192019fcf\" (UID: \"9795f0ec-6b8d-4470-bd63-584192019fcf\") " Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.424276 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9795f0ec-6b8d-4470-bd63-584192019fcf-kube-api-access-dvz79" (OuterVolumeSpecName: "kube-api-access-dvz79") pod "9795f0ec-6b8d-4470-bd63-584192019fcf" (UID: "9795f0ec-6b8d-4470-bd63-584192019fcf"). InnerVolumeSpecName "kube-api-access-dvz79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.425215 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9795f0ec-6b8d-4470-bd63-584192019fcf" (UID: "9795f0ec-6b8d-4470-bd63-584192019fcf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.442292 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9795f0ec-6b8d-4470-bd63-584192019fcf" (UID: "9795f0ec-6b8d-4470-bd63-584192019fcf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.444160 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-inventory" (OuterVolumeSpecName: "inventory") pod "9795f0ec-6b8d-4470-bd63-584192019fcf" (UID: "9795f0ec-6b8d-4470-bd63-584192019fcf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.510581 4729 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.510627 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.510637 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795f0ec-6b8d-4470-bd63-584192019fcf-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.510646 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvz79\" (UniqueName: \"kubernetes.io/projected/9795f0ec-6b8d-4470-bd63-584192019fcf-kube-api-access-dvz79\") on node \"crc\" DevicePath \"\"" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.832382 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" event={"ID":"9795f0ec-6b8d-4470-bd63-584192019fcf","Type":"ContainerDied","Data":"11b59732d28569d13fcc09a106af41d76604652a723f0ef0754ae9ab2eb68276"} Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.832420 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b59732d28569d13fcc09a106af41d76604652a723f0ef0754ae9ab2eb68276" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.832899 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.932800 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g"] Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933537 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="registry-server" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933569 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="registry-server" Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933591 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9795f0ec-6b8d-4470-bd63-584192019fcf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933602 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="9795f0ec-6b8d-4470-bd63-584192019fcf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933625 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="registry-server" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933650 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="registry-server" Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933665 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="extract-content" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933672 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="extract-content" Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933686 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="extract-utilities" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933692 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="extract-utilities" Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933720 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="extract-content" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933727 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="extract-content" Jan 27 14:43:54 crc kubenswrapper[4729]: E0127 14:43:54.933743 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="extract-utilities" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.933772 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="extract-utilities" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.934052 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d62a0134-758f-4404-b73c-77d7070bd4dd" containerName="registry-server" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.934069 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3051994b-1642-4725-9936-14f141ea6ed9" containerName="registry-server" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.934077 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="9795f0ec-6b8d-4470-bd63-584192019fcf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.934944 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.939291 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.939588 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.943267 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.943397 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:43:54 crc kubenswrapper[4729]: I0127 14:43:54.944837 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g"] Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.022784 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.023040 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8rtf\" (UniqueName: \"kubernetes.io/projected/657d96d8-d313-4860-acae-64d35608cd5d-kube-api-access-s8rtf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 
14:43:55.023135 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.037021 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-whv27"] Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.049516 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-whv27"] Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.124986 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8rtf\" (UniqueName: \"kubernetes.io/projected/657d96d8-d313-4860-acae-64d35608cd5d-kube-api-access-s8rtf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.125441 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.125498 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" 
(UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.130981 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.133411 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.143154 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8rtf\" (UniqueName: \"kubernetes.io/projected/657d96d8-d313-4860-acae-64d35608cd5d-kube-api-access-s8rtf\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:55 crc kubenswrapper[4729]: I0127 14:43:55.260583 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:43:56 crc kubenswrapper[4729]: I0127 14:43:55.880557 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g"] Jan 27 14:43:56 crc kubenswrapper[4729]: I0127 14:43:56.078326 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cac17ab6-7586-488e-b302-d3bf641b56ab" path="/var/lib/kubelet/pods/cac17ab6-7586-488e-b302-d3bf641b56ab/volumes" Jan 27 14:43:56 crc kubenswrapper[4729]: I0127 14:43:56.856716 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" event={"ID":"657d96d8-d313-4860-acae-64d35608cd5d","Type":"ContainerStarted","Data":"41af27a3542d2660d1a71cadc93da9b560ba1a9153845f3ee5a152a0278b6c1b"} Jan 27 14:43:57 crc kubenswrapper[4729]: I0127 14:43:57.869991 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" event={"ID":"657d96d8-d313-4860-acae-64d35608cd5d","Type":"ContainerStarted","Data":"2875a6be213942291a0cf07a250a97db9b5c1bc78e49e76dd90cda2a7aaa1653"} Jan 27 14:43:57 crc kubenswrapper[4729]: I0127 14:43:57.896789 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" podStartSLOduration=2.807751282 podStartE2EDuration="3.896772328s" podCreationTimestamp="2026-01-27 14:43:54 +0000 UTC" firstStartedPulling="2026-01-27 14:43:55.884838938 +0000 UTC m=+2322.469029942" lastFinishedPulling="2026-01-27 14:43:56.973859984 +0000 UTC m=+2323.558050988" observedRunningTime="2026-01-27 14:43:57.89085574 +0000 UTC m=+2324.475046744" watchObservedRunningTime="2026-01-27 14:43:57.896772328 +0000 UTC m=+2324.480963332" Jan 27 14:44:04 crc kubenswrapper[4729]: I0127 14:44:04.060783 4729 scope.go:117] "RemoveContainer" 
containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:44:04 crc kubenswrapper[4729]: E0127 14:44:04.061530 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:44:07 crc kubenswrapper[4729]: I0127 14:44:07.045104 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6t9vl"] Jan 27 14:44:07 crc kubenswrapper[4729]: I0127 14:44:07.056744 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6t9vl"] Jan 27 14:44:08 crc kubenswrapper[4729]: I0127 14:44:08.064634 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99c157a2-c017-4ace-bff5-50dfef32c990" path="/var/lib/kubelet/pods/99c157a2-c017-4ace-bff5-50dfef32c990/volumes" Jan 27 14:44:10 crc kubenswrapper[4729]: I0127 14:44:10.065479 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-f9tpr"] Jan 27 14:44:10 crc kubenswrapper[4729]: I0127 14:44:10.076399 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-f9tpr"] Jan 27 14:44:10 crc kubenswrapper[4729]: I0127 14:44:10.731723 4729 scope.go:117] "RemoveContainer" containerID="5a19886819a410481f2f8bb7cfe997fd5637aba3651349a06580e98e101201a5" Jan 27 14:44:10 crc kubenswrapper[4729]: I0127 14:44:10.766249 4729 scope.go:117] "RemoveContainer" containerID="224b7cab4db35f3a104d11bb9e59d9174f138764f01f9114485effdcea39f86b" Jan 27 14:44:10 crc kubenswrapper[4729]: I0127 14:44:10.816837 4729 scope.go:117] "RemoveContainer" containerID="b2b782c35a7c78c09c5bd3b2ae2b768ed5bf1923baed5a33e7aee56cb34c2893" Jan 27 
14:44:10 crc kubenswrapper[4729]: I0127 14:44:10.891729 4729 scope.go:117] "RemoveContainer" containerID="ac89ba3f1e3170ea6990200b8ed3951dbcafe205c30ec887514d6ef579287b51" Jan 27 14:44:12 crc kubenswrapper[4729]: I0127 14:44:12.158992 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e4c134-9472-463f-b7be-226acbf7954b" path="/var/lib/kubelet/pods/f7e4c134-9472-463f-b7be-226acbf7954b/volumes" Jan 27 14:44:16 crc kubenswrapper[4729]: I0127 14:44:16.034900 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dh77n"] Jan 27 14:44:16 crc kubenswrapper[4729]: I0127 14:44:16.047728 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dh77n"] Jan 27 14:44:16 crc kubenswrapper[4729]: I0127 14:44:16.066832 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36bba371-e800-414e-8523-51e905e6d074" path="/var/lib/kubelet/pods/36bba371-e800-414e-8523-51e905e6d074/volumes" Jan 27 14:44:17 crc kubenswrapper[4729]: I0127 14:44:17.051838 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:44:17 crc kubenswrapper[4729]: E0127 14:44:17.052178 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:44:30 crc kubenswrapper[4729]: I0127 14:44:30.053185 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:44:30 crc kubenswrapper[4729]: E0127 14:44:30.054191 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:44:42 crc kubenswrapper[4729]: I0127 14:44:42.051705 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:44:42 crc kubenswrapper[4729]: E0127 14:44:42.052632 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:44:57 crc kubenswrapper[4729]: I0127 14:44:57.051833 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:44:57 crc kubenswrapper[4729]: E0127 14:44:57.052639 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.154126 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j"] Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.156623 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.158544 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.158554 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.169848 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j"] Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.329689 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjl57\" (UniqueName: \"kubernetes.io/projected/b861f842-f980-4152-abe9-41e22094537b-kube-api-access-xjl57\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.329747 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b861f842-f980-4152-abe9-41e22094537b-config-volume\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.329818 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b861f842-f980-4152-abe9-41e22094537b-secret-volume\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.432161 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjl57\" (UniqueName: \"kubernetes.io/projected/b861f842-f980-4152-abe9-41e22094537b-kube-api-access-xjl57\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.432253 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b861f842-f980-4152-abe9-41e22094537b-config-volume\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.432402 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b861f842-f980-4152-abe9-41e22094537b-secret-volume\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.433543 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b861f842-f980-4152-abe9-41e22094537b-config-volume\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.448672 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b861f842-f980-4152-abe9-41e22094537b-secret-volume\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.449579 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjl57\" (UniqueName: \"kubernetes.io/projected/b861f842-f980-4152-abe9-41e22094537b-kube-api-access-xjl57\") pod \"collect-profiles-29492085-76c9j\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:00 crc kubenswrapper[4729]: I0127 14:45:00.486105 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:01 crc kubenswrapper[4729]: I0127 14:45:01.011610 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j"] Jan 27 14:45:01 crc kubenswrapper[4729]: I0127 14:45:01.638518 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" event={"ID":"b861f842-f980-4152-abe9-41e22094537b","Type":"ContainerStarted","Data":"b09654177877401525865a2b8c329af3c746dec04de898e342e072b5ec40ce20"} Jan 27 14:45:01 crc kubenswrapper[4729]: I0127 14:45:01.640172 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" event={"ID":"b861f842-f980-4152-abe9-41e22094537b","Type":"ContainerStarted","Data":"ac437138027430cb529a99ad91d0f3fcb2ca815b5ba72997c1882917b8963b7a"} Jan 27 14:45:01 crc kubenswrapper[4729]: I0127 14:45:01.659688 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" 
podStartSLOduration=1.659665926 podStartE2EDuration="1.659665926s" podCreationTimestamp="2026-01-27 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 14:45:01.65459335 +0000 UTC m=+2388.238784364" watchObservedRunningTime="2026-01-27 14:45:01.659665926 +0000 UTC m=+2388.243856920" Jan 27 14:45:02 crc kubenswrapper[4729]: I0127 14:45:02.650922 4729 generic.go:334] "Generic (PLEG): container finished" podID="b861f842-f980-4152-abe9-41e22094537b" containerID="b09654177877401525865a2b8c329af3c746dec04de898e342e072b5ec40ce20" exitCode=0 Jan 27 14:45:02 crc kubenswrapper[4729]: I0127 14:45:02.650972 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" event={"ID":"b861f842-f980-4152-abe9-41e22094537b","Type":"ContainerDied","Data":"b09654177877401525865a2b8c329af3c746dec04de898e342e072b5ec40ce20"} Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.193182 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.224836 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjl57\" (UniqueName: \"kubernetes.io/projected/b861f842-f980-4152-abe9-41e22094537b-kube-api-access-xjl57\") pod \"b861f842-f980-4152-abe9-41e22094537b\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.224912 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b861f842-f980-4152-abe9-41e22094537b-config-volume\") pod \"b861f842-f980-4152-abe9-41e22094537b\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.225152 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b861f842-f980-4152-abe9-41e22094537b-secret-volume\") pod \"b861f842-f980-4152-abe9-41e22094537b\" (UID: \"b861f842-f980-4152-abe9-41e22094537b\") " Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.229404 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b861f842-f980-4152-abe9-41e22094537b-config-volume" (OuterVolumeSpecName: "config-volume") pod "b861f842-f980-4152-abe9-41e22094537b" (UID: "b861f842-f980-4152-abe9-41e22094537b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.232349 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b861f842-f980-4152-abe9-41e22094537b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b861f842-f980-4152-abe9-41e22094537b" (UID: "b861f842-f980-4152-abe9-41e22094537b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.234124 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b861f842-f980-4152-abe9-41e22094537b-kube-api-access-xjl57" (OuterVolumeSpecName: "kube-api-access-xjl57") pod "b861f842-f980-4152-abe9-41e22094537b" (UID: "b861f842-f980-4152-abe9-41e22094537b"). InnerVolumeSpecName "kube-api-access-xjl57". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.328004 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjl57\" (UniqueName: \"kubernetes.io/projected/b861f842-f980-4152-abe9-41e22094537b-kube-api-access-xjl57\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.328045 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b861f842-f980-4152-abe9-41e22094537b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.328057 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b861f842-f980-4152-abe9-41e22094537b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.676896 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" event={"ID":"b861f842-f980-4152-abe9-41e22094537b","Type":"ContainerDied","Data":"ac437138027430cb529a99ad91d0f3fcb2ca815b5ba72997c1882917b8963b7a"} Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.677220 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac437138027430cb529a99ad91d0f3fcb2ca815b5ba72997c1882917b8963b7a" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.676948 4729 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j" Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.739828 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv"] Jan 27 14:45:04 crc kubenswrapper[4729]: I0127 14:45:04.751516 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492040-d92vv"] Jan 27 14:45:06 crc kubenswrapper[4729]: I0127 14:45:06.065517 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59601bb-7561-4555-99e9-0e6faf392716" path="/var/lib/kubelet/pods/e59601bb-7561-4555-99e9-0e6faf392716/volumes" Jan 27 14:45:11 crc kubenswrapper[4729]: I0127 14:45:11.071075 4729 scope.go:117] "RemoveContainer" containerID="8cbf1ffe2c846c7fddb8115a76cdfe2c3b5726529c1f610e94be15ff9e57dc17" Jan 27 14:45:11 crc kubenswrapper[4729]: I0127 14:45:11.126068 4729 scope.go:117] "RemoveContainer" containerID="f8d1d0b8899d63924602beb57acdf6f382a18ed9a1aac9e316e063e1a73b56b1" Jan 27 14:45:11 crc kubenswrapper[4729]: I0127 14:45:11.196246 4729 scope.go:117] "RemoveContainer" containerID="7107d07701c55bd904425d30531252a639609fa7aec898ce377fada986bc5b1c" Jan 27 14:45:12 crc kubenswrapper[4729]: I0127 14:45:12.051002 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:45:12 crc kubenswrapper[4729]: E0127 14:45:12.051371 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:45:22 crc kubenswrapper[4729]: 
I0127 14:45:22.034533 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zvnjb"] Jan 27 14:45:22 crc kubenswrapper[4729]: I0127 14:45:22.046782 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-krsxm"] Jan 27 14:45:22 crc kubenswrapper[4729]: I0127 14:45:22.065740 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-krsxm"] Jan 27 14:45:22 crc kubenswrapper[4729]: I0127 14:45:22.073275 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zvnjb"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.048545 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-100f-account-create-update-l78l7"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.066166 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4af9-account-create-update-c7f5v"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.082198 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-226hb"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.094949 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-996c-account-create-update-vkz4r"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.109154 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4af9-account-create-update-c7f5v"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.125366 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-100f-account-create-update-l78l7"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.139251 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-226hb"] Jan 27 14:45:23 crc kubenswrapper[4729]: I0127 14:45:23.151545 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-996c-account-create-update-vkz4r"] 
Jan 27 14:45:24 crc kubenswrapper[4729]: I0127 14:45:24.071544 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7126250f-09fc-45d2-ba39-636094d89da7" path="/var/lib/kubelet/pods/7126250f-09fc-45d2-ba39-636094d89da7/volumes" Jan 27 14:45:24 crc kubenswrapper[4729]: I0127 14:45:24.072771 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b863055-1278-4ddd-87fc-5eb337ec92b0" path="/var/lib/kubelet/pods/8b863055-1278-4ddd-87fc-5eb337ec92b0/volumes" Jan 27 14:45:24 crc kubenswrapper[4729]: I0127 14:45:24.073833 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b53e6829-f0c7-4c38-b258-42230df58947" path="/var/lib/kubelet/pods/b53e6829-f0c7-4c38-b258-42230df58947/volumes" Jan 27 14:45:24 crc kubenswrapper[4729]: I0127 14:45:24.075974 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c205222a-ce44-4153-816d-edc3ec8f8240" path="/var/lib/kubelet/pods/c205222a-ce44-4153-816d-edc3ec8f8240/volumes" Jan 27 14:45:24 crc kubenswrapper[4729]: I0127 14:45:24.077004 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28f95b9-03fc-42f0-aa6b-bee0ebbdefce" path="/var/lib/kubelet/pods/e28f95b9-03fc-42f0-aa6b-bee0ebbdefce/volumes" Jan 27 14:45:24 crc kubenswrapper[4729]: I0127 14:45:24.077765 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea0fb13c-3315-42a9-9fdc-14492e98546f" path="/var/lib/kubelet/pods/ea0fb13c-3315-42a9-9fdc-14492e98546f/volumes" Jan 27 14:45:27 crc kubenswrapper[4729]: I0127 14:45:27.051573 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:45:27 crc kubenswrapper[4729]: E0127 14:45:27.052330 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:45:38 crc kubenswrapper[4729]: I0127 14:45:38.052783 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:45:38 crc kubenswrapper[4729]: E0127 14:45:38.053645 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:45:50 crc kubenswrapper[4729]: I0127 14:45:50.052008 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:45:50 crc kubenswrapper[4729]: E0127 14:45:50.053083 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:46:00 crc kubenswrapper[4729]: I0127 14:46:00.085630 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7xp9"] Jan 27 14:46:00 crc kubenswrapper[4729]: I0127 14:46:00.112825 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7xp9"] Jan 27 14:46:02 crc kubenswrapper[4729]: I0127 14:46:02.057798 4729 scope.go:117] 
"RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:46:02 crc kubenswrapper[4729]: E0127 14:46:02.058148 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:46:02 crc kubenswrapper[4729]: I0127 14:46:02.067278 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f535d0a-0620-44b3-84ba-c2119fa90330" path="/var/lib/kubelet/pods/2f535d0a-0620-44b3-84ba-c2119fa90330/volumes" Jan 27 14:46:09 crc kubenswrapper[4729]: I0127 14:46:09.030764 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-xnfdh"] Jan 27 14:46:09 crc kubenswrapper[4729]: I0127 14:46:09.041483 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-026c-account-create-update-ljl8z"] Jan 27 14:46:09 crc kubenswrapper[4729]: I0127 14:46:09.058494 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-xnfdh"] Jan 27 14:46:09 crc kubenswrapper[4729]: I0127 14:46:09.073739 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-026c-account-create-update-ljl8z"] Jan 27 14:46:10 crc kubenswrapper[4729]: I0127 14:46:10.065788 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="987a64e5-85e5-4521-aa1a-8fb88f02e246" path="/var/lib/kubelet/pods/987a64e5-85e5-4521-aa1a-8fb88f02e246/volumes" Jan 27 14:46:10 crc kubenswrapper[4729]: I0127 14:46:10.067109 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a985e50a-2e8c-4e6f-8fc5-a24c20cc7130" path="/var/lib/kubelet/pods/a985e50a-2e8c-4e6f-8fc5-a24c20cc7130/volumes" 
Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.327041 4729 scope.go:117] "RemoveContainer" containerID="43d0bab29f64ac230df67977d9b8d1484730ac0c2c3b6320e6d4e30ed4049352" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.355767 4729 scope.go:117] "RemoveContainer" containerID="52f60d42a7c562fa45bc4ab6b659c8a0829890da72d35bfa36ae4428cc59292d" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.428690 4729 scope.go:117] "RemoveContainer" containerID="2bfee60ed64c068605898eb84da911a4917307db6f876ba08e2d13c1701e27a9" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.494363 4729 scope.go:117] "RemoveContainer" containerID="ef969febfa0aa9fdd00cd05a04646b84a79b0b63815490af282e0c0a08175ca7" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.567588 4729 scope.go:117] "RemoveContainer" containerID="eaffbc34719ccfbda5bcef814dcc2d99bb95da1c34b7996c6662d02e2a4efec3" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.618990 4729 scope.go:117] "RemoveContainer" containerID="88e37e5a6b9ea8a0931cf632b3a09663b478e2320c3a80caded8b2847f21e5c9" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.690958 4729 scope.go:117] "RemoveContainer" containerID="4f7ecbc6bb3b3f3a94d9cd054d36fbdc6de58a6eaec3c3044acf4754f69c8dd7" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.713630 4729 scope.go:117] "RemoveContainer" containerID="f974db68f0ca2dea7e3810d97492d147a0990c8ca4585933b6e57b7199982326" Jan 27 14:46:11 crc kubenswrapper[4729]: I0127 14:46:11.745430 4729 scope.go:117] "RemoveContainer" containerID="2d9967418ed1582b15bfd89241e6e200734128faed4093efb58ac1f52c387e3d" Jan 27 14:46:16 crc kubenswrapper[4729]: I0127 14:46:16.051373 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:46:16 crc kubenswrapper[4729]: E0127 14:46:16.052272 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:46:16 crc kubenswrapper[4729]: I0127 14:46:16.446349 4729 generic.go:334] "Generic (PLEG): container finished" podID="657d96d8-d313-4860-acae-64d35608cd5d" containerID="2875a6be213942291a0cf07a250a97db9b5c1bc78e49e76dd90cda2a7aaa1653" exitCode=0 Jan 27 14:46:16 crc kubenswrapper[4729]: I0127 14:46:16.446395 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" event={"ID":"657d96d8-d313-4860-acae-64d35608cd5d","Type":"ContainerDied","Data":"2875a6be213942291a0cf07a250a97db9b5c1bc78e49e76dd90cda2a7aaa1653"} Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.054580 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.101839 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-ssh-key-openstack-edpm-ipam\") pod \"657d96d8-d313-4860-acae-64d35608cd5d\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.101984 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-inventory\") pod \"657d96d8-d313-4860-acae-64d35608cd5d\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.102062 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8rtf\" (UniqueName: \"kubernetes.io/projected/657d96d8-d313-4860-acae-64d35608cd5d-kube-api-access-s8rtf\") pod \"657d96d8-d313-4860-acae-64d35608cd5d\" (UID: \"657d96d8-d313-4860-acae-64d35608cd5d\") " Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.113351 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657d96d8-d313-4860-acae-64d35608cd5d-kube-api-access-s8rtf" (OuterVolumeSpecName: "kube-api-access-s8rtf") pod "657d96d8-d313-4860-acae-64d35608cd5d" (UID: "657d96d8-d313-4860-acae-64d35608cd5d"). InnerVolumeSpecName "kube-api-access-s8rtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.149994 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-inventory" (OuterVolumeSpecName: "inventory") pod "657d96d8-d313-4860-acae-64d35608cd5d" (UID: "657d96d8-d313-4860-acae-64d35608cd5d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.150898 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "657d96d8-d313-4860-acae-64d35608cd5d" (UID: "657d96d8-d313-4860-acae-64d35608cd5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.205410 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.205451 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/657d96d8-d313-4860-acae-64d35608cd5d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.205464 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8rtf\" (UniqueName: \"kubernetes.io/projected/657d96d8-d313-4860-acae-64d35608cd5d-kube-api-access-s8rtf\") on node \"crc\" DevicePath \"\"" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.468603 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" event={"ID":"657d96d8-d313-4860-acae-64d35608cd5d","Type":"ContainerDied","Data":"41af27a3542d2660d1a71cadc93da9b560ba1a9153845f3ee5a152a0278b6c1b"} Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.468924 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41af27a3542d2660d1a71cadc93da9b560ba1a9153845f3ee5a152a0278b6c1b" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 
14:46:18.468674 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.559903 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7"] Jan 27 14:46:18 crc kubenswrapper[4729]: E0127 14:46:18.560449 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657d96d8-d313-4860-acae-64d35608cd5d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.560465 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="657d96d8-d313-4860-acae-64d35608cd5d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:18 crc kubenswrapper[4729]: E0127 14:46:18.560506 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b861f842-f980-4152-abe9-41e22094537b" containerName="collect-profiles" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.560513 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="b861f842-f980-4152-abe9-41e22094537b" containerName="collect-profiles" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.560739 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="b861f842-f980-4152-abe9-41e22094537b" containerName="collect-profiles" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.560763 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="657d96d8-d313-4860-acae-64d35608cd5d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.561587 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.564465 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.564725 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.566460 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.574407 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.578702 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7"] Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.614968 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fjd\" (UniqueName: \"kubernetes.io/projected/32728732-3b43-4a8e-9f61-f028fd4b3d74-kube-api-access-h9fjd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.615566 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 
14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.615714 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.717547 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.717622 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fjd\" (UniqueName: \"kubernetes.io/projected/32728732-3b43-4a8e-9f61-f028fd4b3d74-kube-api-access-h9fjd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.717845 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.723715 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.723818 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.772684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fjd\" (UniqueName: \"kubernetes.io/projected/32728732-3b43-4a8e-9f61-f028fd4b3d74-kube-api-access-h9fjd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-28rl7\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:18 crc kubenswrapper[4729]: I0127 14:46:18.887444 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:46:19 crc kubenswrapper[4729]: I0127 14:46:19.558344 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:46:19 crc kubenswrapper[4729]: I0127 14:46:19.564525 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7"] Jan 27 14:46:20 crc kubenswrapper[4729]: I0127 14:46:20.497636 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" event={"ID":"32728732-3b43-4a8e-9f61-f028fd4b3d74","Type":"ContainerStarted","Data":"9b96d1e26ca9d5dfff62a1b9e2e18f995daaf404235259c19e74c0a9fb6dcacb"} Jan 27 14:46:20 crc kubenswrapper[4729]: I0127 14:46:20.498313 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" event={"ID":"32728732-3b43-4a8e-9f61-f028fd4b3d74","Type":"ContainerStarted","Data":"6dc2511b2e23e83169379d85fee087a71dd18af7a4366b3ead2942ea5bbb6bbc"} Jan 27 14:46:20 crc kubenswrapper[4729]: I0127 14:46:20.523999 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" podStartSLOduration=1.967444644 podStartE2EDuration="2.523984027s" podCreationTimestamp="2026-01-27 14:46:18 +0000 UTC" firstStartedPulling="2026-01-27 14:46:19.558084655 +0000 UTC m=+2466.142275659" lastFinishedPulling="2026-01-27 14:46:20.114624038 +0000 UTC m=+2466.698815042" observedRunningTime="2026-01-27 14:46:20.522840339 +0000 UTC m=+2467.107031353" watchObservedRunningTime="2026-01-27 14:46:20.523984027 +0000 UTC m=+2467.108175031" Jan 27 14:46:25 crc kubenswrapper[4729]: I0127 14:46:25.035661 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8g82q"] Jan 27 14:46:25 crc 
kubenswrapper[4729]: I0127 14:46:25.047290 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8g82q"] Jan 27 14:46:26 crc kubenswrapper[4729]: I0127 14:46:26.064090 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33365ad-887a-4c00-83a7-419a7f002d92" path="/var/lib/kubelet/pods/d33365ad-887a-4c00-83a7-419a7f002d92/volumes" Jan 27 14:46:29 crc kubenswrapper[4729]: I0127 14:46:29.052476 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:46:29 crc kubenswrapper[4729]: E0127 14:46:29.053322 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:46:41 crc kubenswrapper[4729]: I0127 14:46:41.051902 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:46:41 crc kubenswrapper[4729]: E0127 14:46:41.052856 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:46:43 crc kubenswrapper[4729]: I0127 14:46:43.052028 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bkmjh"] Jan 27 14:46:43 crc kubenswrapper[4729]: I0127 14:46:43.062939 4729 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bkmjh"] Jan 27 14:46:44 crc kubenswrapper[4729]: I0127 14:46:44.063362 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac4d439-3f94-4fc7-bd2f-3b39c25a5987" path="/var/lib/kubelet/pods/fac4d439-3f94-4fc7-bd2f-3b39c25a5987/volumes" Jan 27 14:46:54 crc kubenswrapper[4729]: I0127 14:46:54.051386 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:46:54 crc kubenswrapper[4729]: E0127 14:46:54.052209 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:47:05 crc kubenswrapper[4729]: I0127 14:47:05.051024 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:47:05 crc kubenswrapper[4729]: E0127 14:47:05.051691 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:47:12 crc kubenswrapper[4729]: I0127 14:47:12.006056 4729 scope.go:117] "RemoveContainer" containerID="b26bfeb0557ccd8edf7f132feb45fd03133c55bd1df947058871e9853008ed1b" Jan 27 14:47:12 crc kubenswrapper[4729]: I0127 14:47:12.069772 4729 scope.go:117] "RemoveContainer" 
containerID="506100e1d54c9723e7b2132698324a6df8d2f923f8ac18d84bd20c225e57c48d" Jan 27 14:47:17 crc kubenswrapper[4729]: I0127 14:47:17.052159 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:47:17 crc kubenswrapper[4729]: E0127 14:47:17.053547 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:47:21 crc kubenswrapper[4729]: I0127 14:47:21.047356 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wds9p"] Jan 27 14:47:21 crc kubenswrapper[4729]: I0127 14:47:21.057356 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wds9p"] Jan 27 14:47:22 crc kubenswrapper[4729]: I0127 14:47:22.064716 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582e9a31-a273-45ab-a05f-9bacd55948d6" path="/var/lib/kubelet/pods/582e9a31-a273-45ab-a05f-9bacd55948d6/volumes" Jan 27 14:47:29 crc kubenswrapper[4729]: I0127 14:47:29.051128 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:47:29 crc kubenswrapper[4729]: I0127 14:47:29.349191 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"bdfc1d43aba260a0c451a6e8de022510c8b3bb245d8c6242f69918d2f5967b4d"} Jan 27 14:47:34 crc kubenswrapper[4729]: I0127 14:47:34.402224 4729 generic.go:334] "Generic (PLEG): container finished" 
podID="32728732-3b43-4a8e-9f61-f028fd4b3d74" containerID="9b96d1e26ca9d5dfff62a1b9e2e18f995daaf404235259c19e74c0a9fb6dcacb" exitCode=0 Jan 27 14:47:34 crc kubenswrapper[4729]: I0127 14:47:34.402348 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" event={"ID":"32728732-3b43-4a8e-9f61-f028fd4b3d74","Type":"ContainerDied","Data":"9b96d1e26ca9d5dfff62a1b9e2e18f995daaf404235259c19e74c0a9fb6dcacb"} Jan 27 14:47:35 crc kubenswrapper[4729]: I0127 14:47:35.942427 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.046666 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-inventory\") pod \"32728732-3b43-4a8e-9f61-f028fd4b3d74\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.046811 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-ssh-key-openstack-edpm-ipam\") pod \"32728732-3b43-4a8e-9f61-f028fd4b3d74\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.046967 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fjd\" (UniqueName: \"kubernetes.io/projected/32728732-3b43-4a8e-9f61-f028fd4b3d74-kube-api-access-h9fjd\") pod \"32728732-3b43-4a8e-9f61-f028fd4b3d74\" (UID: \"32728732-3b43-4a8e-9f61-f028fd4b3d74\") " Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.053307 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/32728732-3b43-4a8e-9f61-f028fd4b3d74-kube-api-access-h9fjd" (OuterVolumeSpecName: "kube-api-access-h9fjd") pod "32728732-3b43-4a8e-9f61-f028fd4b3d74" (UID: "32728732-3b43-4a8e-9f61-f028fd4b3d74"). InnerVolumeSpecName "kube-api-access-h9fjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.092795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "32728732-3b43-4a8e-9f61-f028fd4b3d74" (UID: "32728732-3b43-4a8e-9f61-f028fd4b3d74"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.096322 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-inventory" (OuterVolumeSpecName: "inventory") pod "32728732-3b43-4a8e-9f61-f028fd4b3d74" (UID: "32728732-3b43-4a8e-9f61-f028fd4b3d74"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.149740 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.149766 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/32728732-3b43-4a8e-9f61-f028fd4b3d74-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.149777 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fjd\" (UniqueName: \"kubernetes.io/projected/32728732-3b43-4a8e-9f61-f028fd4b3d74-kube-api-access-h9fjd\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.429471 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" event={"ID":"32728732-3b43-4a8e-9f61-f028fd4b3d74","Type":"ContainerDied","Data":"6dc2511b2e23e83169379d85fee087a71dd18af7a4366b3ead2942ea5bbb6bbc"} Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.429852 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc2511b2e23e83169379d85fee087a71dd18af7a4366b3ead2942ea5bbb6bbc" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.429531 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-28rl7" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.587776 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m"] Jan 27 14:47:36 crc kubenswrapper[4729]: E0127 14:47:36.588456 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32728732-3b43-4a8e-9f61-f028fd4b3d74" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.588478 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="32728732-3b43-4a8e-9f61-f028fd4b3d74" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.588714 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="32728732-3b43-4a8e-9f61-f028fd4b3d74" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.589634 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.597715 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.598047 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.598233 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.598281 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.599771 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m"] Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.765472 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.765843 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: 
I0127 14:47:36.766008 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6k8v\" (UniqueName: \"kubernetes.io/projected/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-kube-api-access-t6k8v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.868861 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.869059 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6k8v\" (UniqueName: \"kubernetes.io/projected/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-kube-api-access-t6k8v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.869160 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.873460 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.877603 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.887375 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6k8v\" (UniqueName: \"kubernetes.io/projected/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-kube-api-access-t6k8v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:36 crc kubenswrapper[4729]: I0127 14:47:36.928089 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:37 crc kubenswrapper[4729]: I0127 14:47:37.463124 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m"] Jan 27 14:47:38 crc kubenswrapper[4729]: I0127 14:47:38.457076 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" event={"ID":"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f","Type":"ContainerStarted","Data":"53d12c32f2ab230f1d297f7dc81d0210eb7d88915b4f672de192b4334c87c238"} Jan 27 14:47:38 crc kubenswrapper[4729]: I0127 14:47:38.457609 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" event={"ID":"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f","Type":"ContainerStarted","Data":"215587920b2793dfc09a3b5b98ea30dba3fc951fcd66137a701e486059ade481"} Jan 27 14:47:43 crc kubenswrapper[4729]: I0127 14:47:43.513154 4729 generic.go:334] "Generic (PLEG): container finished" podID="78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" containerID="53d12c32f2ab230f1d297f7dc81d0210eb7d88915b4f672de192b4334c87c238" exitCode=0 Jan 27 14:47:43 crc kubenswrapper[4729]: I0127 14:47:43.513241 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" event={"ID":"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f","Type":"ContainerDied","Data":"53d12c32f2ab230f1d297f7dc81d0210eb7d88915b4f672de192b4334c87c238"} Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.032127 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.107154 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-ssh-key-openstack-edpm-ipam\") pod \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.107424 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6k8v\" (UniqueName: \"kubernetes.io/projected/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-kube-api-access-t6k8v\") pod \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.107463 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-inventory\") pod \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\" (UID: \"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f\") " Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.115869 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-kube-api-access-t6k8v" (OuterVolumeSpecName: "kube-api-access-t6k8v") pod "78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" (UID: "78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f"). InnerVolumeSpecName "kube-api-access-t6k8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.143182 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-inventory" (OuterVolumeSpecName: "inventory") pod "78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" (UID: "78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.147420 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" (UID: "78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.210296 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.210329 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6k8v\" (UniqueName: \"kubernetes.io/projected/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-kube-api-access-t6k8v\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.210371 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.581303 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" event={"ID":"78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f","Type":"ContainerDied","Data":"215587920b2793dfc09a3b5b98ea30dba3fc951fcd66137a701e486059ade481"} Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.581354 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215587920b2793dfc09a3b5b98ea30dba3fc951fcd66137a701e486059ade481" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 
14:47:45.581436 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.618168 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr"] Jan 27 14:47:45 crc kubenswrapper[4729]: E0127 14:47:45.618708 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.618729 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.619057 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.620028 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.623556 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.623802 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.624017 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.624241 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.632004 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr"] Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.721729 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.721775 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.722310 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd997\" (UniqueName: \"kubernetes.io/projected/ebfb952d-e5d5-4ce8-9eb7-49f058023970-kube-api-access-jd997\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.824668 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd997\" (UniqueName: \"kubernetes.io/projected/ebfb952d-e5d5-4ce8-9eb7-49f058023970-kube-api-access-jd997\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.824803 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.824828 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.834841 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.834920 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.841990 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd997\" (UniqueName: \"kubernetes.io/projected/ebfb952d-e5d5-4ce8-9eb7-49f058023970-kube-api-access-jd997\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7dfxr\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:45 crc kubenswrapper[4729]: I0127 14:47:45.958143 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:47:46 crc kubenswrapper[4729]: I0127 14:47:46.516214 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr"] Jan 27 14:47:46 crc kubenswrapper[4729]: W0127 14:47:46.521859 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebfb952d_e5d5_4ce8_9eb7_49f058023970.slice/crio-7bec9fcc63c6151af78229e989de1a624355cb3d94b6e243e0173d8dc434909a WatchSource:0}: Error finding container 7bec9fcc63c6151af78229e989de1a624355cb3d94b6e243e0173d8dc434909a: Status 404 returned error can't find the container with id 7bec9fcc63c6151af78229e989de1a624355cb3d94b6e243e0173d8dc434909a Jan 27 14:47:46 crc kubenswrapper[4729]: I0127 14:47:46.592180 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" event={"ID":"ebfb952d-e5d5-4ce8-9eb7-49f058023970","Type":"ContainerStarted","Data":"7bec9fcc63c6151af78229e989de1a624355cb3d94b6e243e0173d8dc434909a"} Jan 27 14:47:48 crc kubenswrapper[4729]: I0127 14:47:48.618197 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" event={"ID":"ebfb952d-e5d5-4ce8-9eb7-49f058023970","Type":"ContainerStarted","Data":"87d7204a8e7857754293502ad1f18ce3c90fbdf193b455f97dddb1db10eacbe3"} Jan 27 14:47:48 crc kubenswrapper[4729]: I0127 14:47:48.646978 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" podStartSLOduration=2.6503165859999998 podStartE2EDuration="3.646922295s" podCreationTimestamp="2026-01-27 14:47:45 +0000 UTC" firstStartedPulling="2026-01-27 14:47:46.52485645 +0000 UTC m=+2553.109047454" lastFinishedPulling="2026-01-27 14:47:47.521462159 +0000 UTC m=+2554.105653163" 
observedRunningTime="2026-01-27 14:47:48.636962617 +0000 UTC m=+2555.221153621" watchObservedRunningTime="2026-01-27 14:47:48.646922295 +0000 UTC m=+2555.231113319" Jan 27 14:48:12 crc kubenswrapper[4729]: I0127 14:48:12.156227 4729 scope.go:117] "RemoveContainer" containerID="0431cc85fe0c45d29497b70a5d41aa363bf7cba9535e3fac8d6071b24ddbcd5f" Jan 27 14:48:27 crc kubenswrapper[4729]: I0127 14:48:27.012352 4729 generic.go:334] "Generic (PLEG): container finished" podID="ebfb952d-e5d5-4ce8-9eb7-49f058023970" containerID="87d7204a8e7857754293502ad1f18ce3c90fbdf193b455f97dddb1db10eacbe3" exitCode=0 Jan 27 14:48:27 crc kubenswrapper[4729]: I0127 14:48:27.012498 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" event={"ID":"ebfb952d-e5d5-4ce8-9eb7-49f058023970","Type":"ContainerDied","Data":"87d7204a8e7857754293502ad1f18ce3c90fbdf193b455f97dddb1db10eacbe3"} Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.561464 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.705129 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-ssh-key-openstack-edpm-ipam\") pod \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.705212 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jd997\" (UniqueName: \"kubernetes.io/projected/ebfb952d-e5d5-4ce8-9eb7-49f058023970-kube-api-access-jd997\") pod \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.705323 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-inventory\") pod \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\" (UID: \"ebfb952d-e5d5-4ce8-9eb7-49f058023970\") " Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.713144 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebfb952d-e5d5-4ce8-9eb7-49f058023970-kube-api-access-jd997" (OuterVolumeSpecName: "kube-api-access-jd997") pod "ebfb952d-e5d5-4ce8-9eb7-49f058023970" (UID: "ebfb952d-e5d5-4ce8-9eb7-49f058023970"). InnerVolumeSpecName "kube-api-access-jd997". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.744960 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebfb952d-e5d5-4ce8-9eb7-49f058023970" (UID: "ebfb952d-e5d5-4ce8-9eb7-49f058023970"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.745520 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-inventory" (OuterVolumeSpecName: "inventory") pod "ebfb952d-e5d5-4ce8-9eb7-49f058023970" (UID: "ebfb952d-e5d5-4ce8-9eb7-49f058023970"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.812506 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.812542 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jd997\" (UniqueName: \"kubernetes.io/projected/ebfb952d-e5d5-4ce8-9eb7-49f058023970-kube-api-access-jd997\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:28 crc kubenswrapper[4729]: I0127 14:48:28.812625 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb952d-e5d5-4ce8-9eb7-49f058023970-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.034779 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" 
event={"ID":"ebfb952d-e5d5-4ce8-9eb7-49f058023970","Type":"ContainerDied","Data":"7bec9fcc63c6151af78229e989de1a624355cb3d94b6e243e0173d8dc434909a"} Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.034990 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bec9fcc63c6151af78229e989de1a624355cb3d94b6e243e0173d8dc434909a" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.035062 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7dfxr" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.123023 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t"] Jan 27 14:48:29 crc kubenswrapper[4729]: E0127 14:48:29.123973 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfb952d-e5d5-4ce8-9eb7-49f058023970" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.123998 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfb952d-e5d5-4ce8-9eb7-49f058023970" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.124308 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebfb952d-e5d5-4ce8-9eb7-49f058023970" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.125284 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.128450 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.128659 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.128795 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.128848 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.139966 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t"] Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.224330 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtgx\" (UniqueName: \"kubernetes.io/projected/6cc5ced3-d419-4224-a474-bd34874d18dc-kube-api-access-dwtgx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.224385 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.224702 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.326922 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtgx\" (UniqueName: \"kubernetes.io/projected/6cc5ced3-d419-4224-a474-bd34874d18dc-kube-api-access-dwtgx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.326991 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.327182 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.332272 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-inventory\") 
pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.338431 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.345257 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwtgx\" (UniqueName: \"kubernetes.io/projected/6cc5ced3-d419-4224-a474-bd34874d18dc-kube-api-access-dwtgx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:29 crc kubenswrapper[4729]: I0127 14:48:29.482729 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:48:30 crc kubenswrapper[4729]: I0127 14:48:30.001073 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t"] Jan 27 14:48:30 crc kubenswrapper[4729]: I0127 14:48:30.044901 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" event={"ID":"6cc5ced3-d419-4224-a474-bd34874d18dc","Type":"ContainerStarted","Data":"3a1b3fc1ecad1a1426c7584f995f25bdcc21fa4acf502d41ffb3ba954597ed24"} Jan 27 14:48:31 crc kubenswrapper[4729]: I0127 14:48:31.059318 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" event={"ID":"6cc5ced3-d419-4224-a474-bd34874d18dc","Type":"ContainerStarted","Data":"15de6366f01b06c12b82e63c13a28f598143f83fff598c75570a7ac9f138c1f5"} Jan 27 14:48:31 crc kubenswrapper[4729]: I0127 14:48:31.091402 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" podStartSLOduration=1.474659517 podStartE2EDuration="2.091383283s" podCreationTimestamp="2026-01-27 14:48:29 +0000 UTC" firstStartedPulling="2026-01-27 14:48:30.006794968 +0000 UTC m=+2596.590985972" lastFinishedPulling="2026-01-27 14:48:30.623518734 +0000 UTC m=+2597.207709738" observedRunningTime="2026-01-27 14:48:31.082806749 +0000 UTC m=+2597.666997783" watchObservedRunningTime="2026-01-27 14:48:31.091383283 +0000 UTC m=+2597.675574277" Jan 27 14:48:40 crc kubenswrapper[4729]: I0127 14:48:40.696182 4729 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-ww9b5 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:48:40 crc kubenswrapper[4729]: I0127 14:48:40.696208 4729 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-ww9b5 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 14:48:40 crc kubenswrapper[4729]: I0127 14:48:40.696756 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" podUID="a4ee6022-b5c2-4ec1-8b01-c63b538c3c13" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:48:40 crc kubenswrapper[4729]: I0127 14:48:40.696787 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-ww9b5" podUID="a4ee6022-b5c2-4ec1-8b01-c63b538c3c13" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.77:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 14:48:43 crc kubenswrapper[4729]: I0127 14:48:43.752113 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-k2h86" podUID="2069295e-9cb7-458a-b4f6-4f569b6e6a8e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 14:48:45 crc kubenswrapper[4729]: I0127 14:48:45.839675 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="97bf3a8e-2abb-4659-9719-fdffb80a92b1" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 27 14:49:10 crc kubenswrapper[4729]: I0127 14:49:10.045526 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-t7d94"] Jan 27 14:49:10 crc kubenswrapper[4729]: I0127 14:49:10.065430 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-t7d94"] Jan 27 14:49:12 crc kubenswrapper[4729]: I0127 14:49:12.067068 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2272f0db-3c4c-44f6-97a7-685b8c9fd1c5" path="/var/lib/kubelet/pods/2272f0db-3c4c-44f6-97a7-685b8c9fd1c5/volumes" Jan 27 14:49:12 crc kubenswrapper[4729]: I0127 14:49:12.280055 4729 scope.go:117] "RemoveContainer" containerID="7336c7a1d11614f6671f6a199ad8a8445e0197d363906e6ab6590b2cbcc38100" Jan 27 14:49:25 crc kubenswrapper[4729]: I0127 14:49:25.653044 4729 generic.go:334] "Generic (PLEG): container finished" podID="6cc5ced3-d419-4224-a474-bd34874d18dc" containerID="15de6366f01b06c12b82e63c13a28f598143f83fff598c75570a7ac9f138c1f5" exitCode=0 Jan 27 14:49:25 crc kubenswrapper[4729]: I0127 14:49:25.653563 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" event={"ID":"6cc5ced3-d419-4224-a474-bd34874d18dc","Type":"ContainerDied","Data":"15de6366f01b06c12b82e63c13a28f598143f83fff598c75570a7ac9f138c1f5"} Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.149709 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.316566 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-ssh-key-openstack-edpm-ipam\") pod \"6cc5ced3-d419-4224-a474-bd34874d18dc\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.316968 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-inventory\") pod \"6cc5ced3-d419-4224-a474-bd34874d18dc\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.317010 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwtgx\" (UniqueName: \"kubernetes.io/projected/6cc5ced3-d419-4224-a474-bd34874d18dc-kube-api-access-dwtgx\") pod \"6cc5ced3-d419-4224-a474-bd34874d18dc\" (UID: \"6cc5ced3-d419-4224-a474-bd34874d18dc\") " Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.322493 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc5ced3-d419-4224-a474-bd34874d18dc-kube-api-access-dwtgx" (OuterVolumeSpecName: "kube-api-access-dwtgx") pod "6cc5ced3-d419-4224-a474-bd34874d18dc" (UID: "6cc5ced3-d419-4224-a474-bd34874d18dc"). InnerVolumeSpecName "kube-api-access-dwtgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.349621 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-inventory" (OuterVolumeSpecName: "inventory") pod "6cc5ced3-d419-4224-a474-bd34874d18dc" (UID: "6cc5ced3-d419-4224-a474-bd34874d18dc"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.359115 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6cc5ced3-d419-4224-a474-bd34874d18dc" (UID: "6cc5ced3-d419-4224-a474-bd34874d18dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.418949 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.418998 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cc5ced3-d419-4224-a474-bd34874d18dc-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.419009 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwtgx\" (UniqueName: \"kubernetes.io/projected/6cc5ced3-d419-4224-a474-bd34874d18dc-kube-api-access-dwtgx\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.679204 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" event={"ID":"6cc5ced3-d419-4224-a474-bd34874d18dc","Type":"ContainerDied","Data":"3a1b3fc1ecad1a1426c7584f995f25bdcc21fa4acf502d41ffb3ba954597ed24"} Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.679252 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a1b3fc1ecad1a1426c7584f995f25bdcc21fa4acf502d41ffb3ba954597ed24" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 
14:49:27.679319 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.768089 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2nstj"] Jan 27 14:49:27 crc kubenswrapper[4729]: E0127 14:49:27.768613 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc5ced3-d419-4224-a474-bd34874d18dc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.768635 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc5ced3-d419-4224-a474-bd34874d18dc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.768864 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc5ced3-d419-4224-a474-bd34874d18dc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.769646 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.772356 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.772604 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.772719 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.774790 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.781679 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2nstj"] Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.829236 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.829306 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.829344 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-v674n\" (UniqueName: \"kubernetes.io/projected/27dd40db-176b-45b4-a886-967fcb9ce2df-kube-api-access-v674n\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.931392 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.931463 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.931503 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v674n\" (UniqueName: \"kubernetes.io/projected/27dd40db-176b-45b4-a886-967fcb9ce2df-kube-api-access-v674n\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.935914 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 
14:49:27.948522 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:27 crc kubenswrapper[4729]: I0127 14:49:27.950952 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v674n\" (UniqueName: \"kubernetes.io/projected/27dd40db-176b-45b4-a886-967fcb9ce2df-kube-api-access-v674n\") pod \"ssh-known-hosts-edpm-deployment-2nstj\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:28 crc kubenswrapper[4729]: I0127 14:49:28.117593 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:28 crc kubenswrapper[4729]: I0127 14:49:28.790661 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-2nstj"] Jan 27 14:49:29 crc kubenswrapper[4729]: I0127 14:49:29.702586 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" event={"ID":"27dd40db-176b-45b4-a886-967fcb9ce2df","Type":"ContainerStarted","Data":"2b19a1b9682ca266d1c0b39575465f872ff6aabe00f2338608da04a4e25fd624"} Jan 27 14:49:29 crc kubenswrapper[4729]: I0127 14:49:29.702975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" event={"ID":"27dd40db-176b-45b4-a886-967fcb9ce2df","Type":"ContainerStarted","Data":"ad7da9731cd6c708435ddcf5ee0972db3fb340c7d4a519fc0007a714877455bf"} Jan 27 14:49:29 crc kubenswrapper[4729]: I0127 14:49:29.728674 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" podStartSLOduration=2.312981815 
podStartE2EDuration="2.728655292s" podCreationTimestamp="2026-01-27 14:49:27 +0000 UTC" firstStartedPulling="2026-01-27 14:49:28.784714438 +0000 UTC m=+2655.368905442" lastFinishedPulling="2026-01-27 14:49:29.200387915 +0000 UTC m=+2655.784578919" observedRunningTime="2026-01-27 14:49:29.721059102 +0000 UTC m=+2656.305250106" watchObservedRunningTime="2026-01-27 14:49:29.728655292 +0000 UTC m=+2656.312846296" Jan 27 14:49:37 crc kubenswrapper[4729]: I0127 14:49:37.788433 4729 generic.go:334] "Generic (PLEG): container finished" podID="27dd40db-176b-45b4-a886-967fcb9ce2df" containerID="2b19a1b9682ca266d1c0b39575465f872ff6aabe00f2338608da04a4e25fd624" exitCode=0 Jan 27 14:49:37 crc kubenswrapper[4729]: I0127 14:49:37.788553 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" event={"ID":"27dd40db-176b-45b4-a886-967fcb9ce2df","Type":"ContainerDied","Data":"2b19a1b9682ca266d1c0b39575465f872ff6aabe00f2338608da04a4e25fd624"} Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.521444 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.651530 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-inventory-0\") pod \"27dd40db-176b-45b4-a886-967fcb9ce2df\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.651632 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-ssh-key-openstack-edpm-ipam\") pod \"27dd40db-176b-45b4-a886-967fcb9ce2df\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.651706 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v674n\" (UniqueName: \"kubernetes.io/projected/27dd40db-176b-45b4-a886-967fcb9ce2df-kube-api-access-v674n\") pod \"27dd40db-176b-45b4-a886-967fcb9ce2df\" (UID: \"27dd40db-176b-45b4-a886-967fcb9ce2df\") " Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.662451 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27dd40db-176b-45b4-a886-967fcb9ce2df-kube-api-access-v674n" (OuterVolumeSpecName: "kube-api-access-v674n") pod "27dd40db-176b-45b4-a886-967fcb9ce2df" (UID: "27dd40db-176b-45b4-a886-967fcb9ce2df"). InnerVolumeSpecName "kube-api-access-v674n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.696437 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "27dd40db-176b-45b4-a886-967fcb9ce2df" (UID: "27dd40db-176b-45b4-a886-967fcb9ce2df"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.698538 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27dd40db-176b-45b4-a886-967fcb9ce2df" (UID: "27dd40db-176b-45b4-a886-967fcb9ce2df"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.754394 4729 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.754794 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27dd40db-176b-45b4-a886-967fcb9ce2df-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.754808 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v674n\" (UniqueName: \"kubernetes.io/projected/27dd40db-176b-45b4-a886-967fcb9ce2df-kube-api-access-v674n\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.819673 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" event={"ID":"27dd40db-176b-45b4-a886-967fcb9ce2df","Type":"ContainerDied","Data":"ad7da9731cd6c708435ddcf5ee0972db3fb340c7d4a519fc0007a714877455bf"} Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.819757 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7da9731cd6c708435ddcf5ee0972db3fb340c7d4a519fc0007a714877455bf" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.819712 
4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-2nstj" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.918845 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w"] Jan 27 14:49:39 crc kubenswrapper[4729]: E0127 14:49:39.919668 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dd40db-176b-45b4-a886-967fcb9ce2df" containerName="ssh-known-hosts-edpm-deployment" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.919754 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dd40db-176b-45b4-a886-967fcb9ce2df" containerName="ssh-known-hosts-edpm-deployment" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.920161 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dd40db-176b-45b4-a886-967fcb9ce2df" containerName="ssh-known-hosts-edpm-deployment" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.921394 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.926580 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.926944 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.927152 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.927302 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:49:39 crc kubenswrapper[4729]: I0127 14:49:39.939134 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w"] Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.062441 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lctc\" (UniqueName: \"kubernetes.io/projected/af66e59e-8967-4730-a4ca-9ff115554d5b-kube-api-access-8lctc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.062518 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.062654 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.164800 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.165145 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lctc\" (UniqueName: \"kubernetes.io/projected/af66e59e-8967-4730-a4ca-9ff115554d5b-kube-api-access-8lctc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.165204 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.174173 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.190656 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.212637 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lctc\" (UniqueName: \"kubernetes.io/projected/af66e59e-8967-4730-a4ca-9ff115554d5b-kube-api-access-8lctc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qz44w\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.266746 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:40 crc kubenswrapper[4729]: I0127 14:49:40.912983 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w"] Jan 27 14:49:41 crc kubenswrapper[4729]: I0127 14:49:41.845181 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" event={"ID":"af66e59e-8967-4730-a4ca-9ff115554d5b","Type":"ContainerStarted","Data":"af7fc5a55fff5775a6ad07ca8a63aacf23d751a6bed57098be0f2e5ca46b834f"} Jan 27 14:49:41 crc kubenswrapper[4729]: I0127 14:49:41.845783 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" event={"ID":"af66e59e-8967-4730-a4ca-9ff115554d5b","Type":"ContainerStarted","Data":"75aa7456022acfcb3118baafa3a4458f2da647f9472479fdb3267a18516c8bfe"} Jan 27 14:49:41 crc kubenswrapper[4729]: I0127 14:49:41.871601 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" podStartSLOduration=2.440407561 podStartE2EDuration="2.871576184s" podCreationTimestamp="2026-01-27 14:49:39 +0000 UTC" firstStartedPulling="2026-01-27 14:49:40.917739402 +0000 UTC m=+2667.501930406" lastFinishedPulling="2026-01-27 14:49:41.348908025 +0000 UTC m=+2667.933099029" observedRunningTime="2026-01-27 14:49:41.86460796 +0000 UTC m=+2668.448798964" watchObservedRunningTime="2026-01-27 14:49:41.871576184 +0000 UTC m=+2668.455767208" Jan 27 14:49:49 crc kubenswrapper[4729]: I0127 14:49:49.937815 4729 generic.go:334] "Generic (PLEG): container finished" podID="af66e59e-8967-4730-a4ca-9ff115554d5b" containerID="af7fc5a55fff5775a6ad07ca8a63aacf23d751a6bed57098be0f2e5ca46b834f" exitCode=0 Jan 27 14:49:49 crc kubenswrapper[4729]: I0127 14:49:49.937932 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" event={"ID":"af66e59e-8967-4730-a4ca-9ff115554d5b","Type":"ContainerDied","Data":"af7fc5a55fff5775a6ad07ca8a63aacf23d751a6bed57098be0f2e5ca46b834f"} Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.460537 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.600719 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-ssh-key-openstack-edpm-ipam\") pod \"af66e59e-8967-4730-a4ca-9ff115554d5b\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.600847 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lctc\" (UniqueName: \"kubernetes.io/projected/af66e59e-8967-4730-a4ca-9ff115554d5b-kube-api-access-8lctc\") pod \"af66e59e-8967-4730-a4ca-9ff115554d5b\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.601029 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-inventory\") pod \"af66e59e-8967-4730-a4ca-9ff115554d5b\" (UID: \"af66e59e-8967-4730-a4ca-9ff115554d5b\") " Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.606400 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af66e59e-8967-4730-a4ca-9ff115554d5b-kube-api-access-8lctc" (OuterVolumeSpecName: "kube-api-access-8lctc") pod "af66e59e-8967-4730-a4ca-9ff115554d5b" (UID: "af66e59e-8967-4730-a4ca-9ff115554d5b"). InnerVolumeSpecName "kube-api-access-8lctc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.637248 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af66e59e-8967-4730-a4ca-9ff115554d5b" (UID: "af66e59e-8967-4730-a4ca-9ff115554d5b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.653261 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-inventory" (OuterVolumeSpecName: "inventory") pod "af66e59e-8967-4730-a4ca-9ff115554d5b" (UID: "af66e59e-8967-4730-a4ca-9ff115554d5b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.704043 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.704088 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lctc\" (UniqueName: \"kubernetes.io/projected/af66e59e-8967-4730-a4ca-9ff115554d5b-kube-api-access-8lctc\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.704101 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af66e59e-8967-4730-a4ca-9ff115554d5b-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.964701 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" 
event={"ID":"af66e59e-8967-4730-a4ca-9ff115554d5b","Type":"ContainerDied","Data":"75aa7456022acfcb3118baafa3a4458f2da647f9472479fdb3267a18516c8bfe"} Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.964762 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75aa7456022acfcb3118baafa3a4458f2da647f9472479fdb3267a18516c8bfe" Jan 27 14:49:51 crc kubenswrapper[4729]: I0127 14:49:51.964843 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qz44w" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.030216 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t"] Jan 27 14:49:52 crc kubenswrapper[4729]: E0127 14:49:52.030804 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af66e59e-8967-4730-a4ca-9ff115554d5b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.030822 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="af66e59e-8967-4730-a4ca-9ff115554d5b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.031188 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="af66e59e-8967-4730-a4ca-9ff115554d5b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.032132 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.033990 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.034936 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.034955 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.035406 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.046028 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t"] Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.114328 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.114850 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.114969 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfxc\" (UniqueName: \"kubernetes.io/projected/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-kube-api-access-6dfxc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.217358 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.217564 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfxc\" (UniqueName: \"kubernetes.io/projected/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-kube-api-access-6dfxc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.217852 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.222549 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.222575 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.239662 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfxc\" (UniqueName: \"kubernetes.io/projected/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-kube-api-access-6dfxc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.366111 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.655829 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.656252 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.910671 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t"] Jan 27 14:49:52 crc kubenswrapper[4729]: I0127 14:49:52.994586 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" event={"ID":"0860b8dc-10f3-41e7-8f6e-231f28f3cea6","Type":"ContainerStarted","Data":"9ac1bde31337facfc426377912bd073c1db99966cfe466ec7c8d16d8c40c9008"} Jan 27 14:49:54 crc kubenswrapper[4729]: I0127 14:49:54.008185 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" event={"ID":"0860b8dc-10f3-41e7-8f6e-231f28f3cea6","Type":"ContainerStarted","Data":"79f1068a87bc768385df3ce76bb311bd1b8bf39449bc5cdc21e1384eb6710e6f"} Jan 27 14:49:54 crc kubenswrapper[4729]: I0127 14:49:54.031575 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" podStartSLOduration=1.346841797 podStartE2EDuration="2.031510259s" 
podCreationTimestamp="2026-01-27 14:49:52 +0000 UTC" firstStartedPulling="2026-01-27 14:49:52.91909084 +0000 UTC m=+2679.503281844" lastFinishedPulling="2026-01-27 14:49:53.603759302 +0000 UTC m=+2680.187950306" observedRunningTime="2026-01-27 14:49:54.022048134 +0000 UTC m=+2680.606239128" watchObservedRunningTime="2026-01-27 14:49:54.031510259 +0000 UTC m=+2680.615701283" Jan 27 14:49:57 crc kubenswrapper[4729]: I0127 14:49:57.073063 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-m7xd4"] Jan 27 14:49:57 crc kubenswrapper[4729]: I0127 14:49:57.089794 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-m7xd4"] Jan 27 14:49:58 crc kubenswrapper[4729]: I0127 14:49:58.063928 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0aeb60-0855-486a-b23d-790410009406" path="/var/lib/kubelet/pods/ac0aeb60-0855-486a-b23d-790410009406/volumes" Jan 27 14:50:04 crc kubenswrapper[4729]: I0127 14:50:04.543351 4729 generic.go:334] "Generic (PLEG): container finished" podID="0860b8dc-10f3-41e7-8f6e-231f28f3cea6" containerID="79f1068a87bc768385df3ce76bb311bd1b8bf39449bc5cdc21e1384eb6710e6f" exitCode=0 Jan 27 14:50:04 crc kubenswrapper[4729]: I0127 14:50:04.543420 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" event={"ID":"0860b8dc-10f3-41e7-8f6e-231f28f3cea6","Type":"ContainerDied","Data":"79f1068a87bc768385df3ce76bb311bd1b8bf39449bc5cdc21e1384eb6710e6f"} Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.114632 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.294682 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfxc\" (UniqueName: \"kubernetes.io/projected/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-kube-api-access-6dfxc\") pod \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.294810 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-inventory\") pod \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.294958 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-ssh-key-openstack-edpm-ipam\") pod \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\" (UID: \"0860b8dc-10f3-41e7-8f6e-231f28f3cea6\") " Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.302568 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-kube-api-access-6dfxc" (OuterVolumeSpecName: "kube-api-access-6dfxc") pod "0860b8dc-10f3-41e7-8f6e-231f28f3cea6" (UID: "0860b8dc-10f3-41e7-8f6e-231f28f3cea6"). InnerVolumeSpecName "kube-api-access-6dfxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.330102 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0860b8dc-10f3-41e7-8f6e-231f28f3cea6" (UID: "0860b8dc-10f3-41e7-8f6e-231f28f3cea6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.331636 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-inventory" (OuterVolumeSpecName: "inventory") pod "0860b8dc-10f3-41e7-8f6e-231f28f3cea6" (UID: "0860b8dc-10f3-41e7-8f6e-231f28f3cea6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.398297 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfxc\" (UniqueName: \"kubernetes.io/projected/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-kube-api-access-6dfxc\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.398346 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.398360 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0860b8dc-10f3-41e7-8f6e-231f28f3cea6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.570646 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" 
event={"ID":"0860b8dc-10f3-41e7-8f6e-231f28f3cea6","Type":"ContainerDied","Data":"9ac1bde31337facfc426377912bd073c1db99966cfe466ec7c8d16d8c40c9008"} Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.570685 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.570701 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac1bde31337facfc426377912bd073c1db99966cfe466ec7c8d16d8c40c9008" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.688342 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr"] Jan 27 14:50:06 crc kubenswrapper[4729]: E0127 14:50:06.688889 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0860b8dc-10f3-41e7-8f6e-231f28f3cea6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.688908 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="0860b8dc-10f3-41e7-8f6e-231f28f3cea6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.689162 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="0860b8dc-10f3-41e7-8f6e-231f28f3cea6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.690039 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.692454 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.692570 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.692768 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.692804 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.692992 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.693110 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.693220 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.695835 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.695911 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.722232 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr"] Jan 27 14:50:06 crc 
kubenswrapper[4729]: I0127 14:50:06.808392 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808537 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808568 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808605 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808635 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808663 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808689 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808724 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808752 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808801 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808824 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbw5w\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-kube-api-access-xbw5w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808862 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808913 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808941 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.808966 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.809007 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: 
\"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.911722 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912299 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912335 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbw5w\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-kube-api-access-xbw5w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912403 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 
14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912450 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912479 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912507 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912569 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912697 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912948 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.912984 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.913044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.913088 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.913129 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.913169 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.913224 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.918448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.918530 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.918590 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.919900 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.920249 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.921246 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.921497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.921511 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.921989 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.922358 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.923820 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.924189 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.924534 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.932745 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.933197 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:06 crc kubenswrapper[4729]: I0127 14:50:06.934766 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbw5w\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-kube-api-access-xbw5w\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:07 crc kubenswrapper[4729]: I0127 14:50:07.024283 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:07 crc kubenswrapper[4729]: I0127 14:50:07.574983 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr"] Jan 27 14:50:08 crc kubenswrapper[4729]: I0127 14:50:08.593423 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" event={"ID":"2121d941-3524-4b71-ac16-41f4679e3525","Type":"ContainerStarted","Data":"35cabcb42fa3f1a612faab608b52efbfba58c654ac2375f46dbacd14754dced9"} Jan 27 14:50:08 crc kubenswrapper[4729]: I0127 14:50:08.595027 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" event={"ID":"2121d941-3524-4b71-ac16-41f4679e3525","Type":"ContainerStarted","Data":"870994eb13744f6b0c3ee93aa383c552a173de072dde4a6f8ed5b6ed709f5f26"} Jan 27 14:50:08 crc kubenswrapper[4729]: I0127 14:50:08.624926 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" podStartSLOduration=2.163956795 podStartE2EDuration="2.624905543s" podCreationTimestamp="2026-01-27 14:50:06 +0000 UTC" firstStartedPulling="2026-01-27 14:50:07.587891965 +0000 UTC m=+2694.172082969" lastFinishedPulling="2026-01-27 14:50:08.048840713 +0000 UTC m=+2694.633031717" observedRunningTime="2026-01-27 14:50:08.616051182 +0000 UTC m=+2695.200242206" watchObservedRunningTime="2026-01-27 14:50:08.624905543 +0000 UTC m=+2695.209096567" Jan 27 14:50:12 crc kubenswrapper[4729]: I0127 14:50:12.357261 4729 scope.go:117] "RemoveContainer" containerID="5df90376480176487af8b7547bceb7121703a187a5682c983c6d34fc9c499804" Jan 27 14:50:22 crc kubenswrapper[4729]: I0127 14:50:22.654946 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:50:22 crc kubenswrapper[4729]: I0127 14:50:22.655479 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.733443 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-66jqm"] Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.736325 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.762109 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66jqm"] Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.841834 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-utilities\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.842241 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-catalog-content\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 
14:50:37.842279 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gkw7\" (UniqueName: \"kubernetes.io/projected/2d269691-298e-4e8a-8d98-39d020545313-kube-api-access-8gkw7\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.944926 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-catalog-content\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.945001 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gkw7\" (UniqueName: \"kubernetes.io/projected/2d269691-298e-4e8a-8d98-39d020545313-kube-api-access-8gkw7\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.945271 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-utilities\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.945437 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-catalog-content\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc 
kubenswrapper[4729]: I0127 14:50:37.945764 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-utilities\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:37 crc kubenswrapper[4729]: I0127 14:50:37.968407 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gkw7\" (UniqueName: \"kubernetes.io/projected/2d269691-298e-4e8a-8d98-39d020545313-kube-api-access-8gkw7\") pod \"certified-operators-66jqm\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:38 crc kubenswrapper[4729]: I0127 14:50:38.072709 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:38 crc kubenswrapper[4729]: I0127 14:50:38.657678 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-66jqm"] Jan 27 14:50:38 crc kubenswrapper[4729]: I0127 14:50:38.901647 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerStarted","Data":"087cd0a0b8a3342a4320cf6dc295b3917d42c00ab80b5cbe93d8f8258c80cfcb"} Jan 27 14:50:39 crc kubenswrapper[4729]: I0127 14:50:39.917231 4729 generic.go:334] "Generic (PLEG): container finished" podID="2d269691-298e-4e8a-8d98-39d020545313" containerID="09aa4965b6645361b678261a2cdfa42e06f4aa9250085be45f4a3547c9c2e096" exitCode=0 Jan 27 14:50:39 crc kubenswrapper[4729]: I0127 14:50:39.917300 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" 
event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerDied","Data":"09aa4965b6645361b678261a2cdfa42e06f4aa9250085be45f4a3547c9c2e096"} Jan 27 14:50:41 crc kubenswrapper[4729]: I0127 14:50:41.945160 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerStarted","Data":"6b640f41a00840e0ad26f05f05e3392437ef6250a4fd647eef49e7f86e6c4d8d"} Jan 27 14:50:46 crc kubenswrapper[4729]: I0127 14:50:46.004237 4729 generic.go:334] "Generic (PLEG): container finished" podID="2d269691-298e-4e8a-8d98-39d020545313" containerID="6b640f41a00840e0ad26f05f05e3392437ef6250a4fd647eef49e7f86e6c4d8d" exitCode=0 Jan 27 14:50:46 crc kubenswrapper[4729]: I0127 14:50:46.004436 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerDied","Data":"6b640f41a00840e0ad26f05f05e3392437ef6250a4fd647eef49e7f86e6c4d8d"} Jan 27 14:50:48 crc kubenswrapper[4729]: I0127 14:50:48.031803 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerStarted","Data":"17bd91655021a3ac6c84c6bb059ee7984569584681b9d0298053be3b05a2532a"} Jan 27 14:50:48 crc kubenswrapper[4729]: I0127 14:50:48.062254 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-66jqm" podStartSLOduration=4.234536669 podStartE2EDuration="11.062231602s" podCreationTimestamp="2026-01-27 14:50:37 +0000 UTC" firstStartedPulling="2026-01-27 14:50:39.920232959 +0000 UTC m=+2726.504423963" lastFinishedPulling="2026-01-27 14:50:46.747927892 +0000 UTC m=+2733.332118896" observedRunningTime="2026-01-27 14:50:48.056854178 +0000 UTC m=+2734.641045202" watchObservedRunningTime="2026-01-27 14:50:48.062231602 +0000 UTC 
m=+2734.646422606" Jan 27 14:50:48 crc kubenswrapper[4729]: I0127 14:50:48.074824 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:48 crc kubenswrapper[4729]: I0127 14:50:48.075040 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:49 crc kubenswrapper[4729]: I0127 14:50:49.134448 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-66jqm" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="registry-server" probeResult="failure" output=< Jan 27 14:50:49 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:50:49 crc kubenswrapper[4729]: > Jan 27 14:50:52 crc kubenswrapper[4729]: I0127 14:50:52.655598 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:50:52 crc kubenswrapper[4729]: I0127 14:50:52.656288 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:50:52 crc kubenswrapper[4729]: I0127 14:50:52.656359 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:50:52 crc kubenswrapper[4729]: I0127 14:50:52.657340 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bdfc1d43aba260a0c451a6e8de022510c8b3bb245d8c6242f69918d2f5967b4d"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:50:52 crc kubenswrapper[4729]: I0127 14:50:52.657401 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://bdfc1d43aba260a0c451a6e8de022510c8b3bb245d8c6242f69918d2f5967b4d" gracePeriod=600 Jan 27 14:50:53 crc kubenswrapper[4729]: I0127 14:50:53.093164 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="bdfc1d43aba260a0c451a6e8de022510c8b3bb245d8c6242f69918d2f5967b4d" exitCode=0 Jan 27 14:50:53 crc kubenswrapper[4729]: I0127 14:50:53.093215 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"bdfc1d43aba260a0c451a6e8de022510c8b3bb245d8c6242f69918d2f5967b4d"} Jan 27 14:50:53 crc kubenswrapper[4729]: I0127 14:50:53.093255 4729 scope.go:117] "RemoveContainer" containerID="69d65796fc5ad72a1671f378ba736caae90967bec0492057001edc09f3bb4d42" Jan 27 14:50:54 crc kubenswrapper[4729]: I0127 14:50:54.106387 4729 generic.go:334] "Generic (PLEG): container finished" podID="2121d941-3524-4b71-ac16-41f4679e3525" containerID="35cabcb42fa3f1a612faab608b52efbfba58c654ac2375f46dbacd14754dced9" exitCode=0 Jan 27 14:50:54 crc kubenswrapper[4729]: I0127 14:50:54.106545 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" 
event={"ID":"2121d941-3524-4b71-ac16-41f4679e3525","Type":"ContainerDied","Data":"35cabcb42fa3f1a612faab608b52efbfba58c654ac2375f46dbacd14754dced9"} Jan 27 14:50:54 crc kubenswrapper[4729]: I0127 14:50:54.110404 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced"} Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.744283 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906167 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906269 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-nova-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906301 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-repo-setup-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906329 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906412 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-neutron-metadata-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906481 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906553 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906626 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ssh-key-openstack-edpm-ipam\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906645 4729 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ovn-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906671 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-libvirt-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906690 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-bootstrap-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906715 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906735 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-power-monitoring-combined-ca-bundle\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906828 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-inventory\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906847 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.906936 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbw5w\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-kube-api-access-xbw5w\") pod \"2121d941-3524-4b71-ac16-41f4679e3525\" (UID: \"2121d941-3524-4b71-ac16-41f4679e3525\") " Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.914562 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.914700 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.914999 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.915727 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.916766 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.919375 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.919992 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.920746 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-kube-api-access-xbw5w" (OuterVolumeSpecName: "kube-api-access-xbw5w") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "kube-api-access-xbw5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.922710 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.923194 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.923126 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.924678 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.930159 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.930529 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.956247 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:55 crc kubenswrapper[4729]: I0127 14:50:55.958770 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-inventory" (OuterVolumeSpecName: "inventory") pod "2121d941-3524-4b71-ac16-41f4679e3525" (UID: "2121d941-3524-4b71-ac16-41f4679e3525"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.010170 4729 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.010864 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.010989 4729 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011063 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011138 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011211 4729 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011279 4729 reconciler_common.go:293] "Volume detached for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011354 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011431 4729 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011500 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011581 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011655 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbw5w\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-kube-api-access-xbw5w\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011725 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011800 4729 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011892 4729 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2121d941-3524-4b71-ac16-41f4679e3525-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.011980 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2121d941-3524-4b71-ac16-41f4679e3525-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.131397 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.131605 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr" event={"ID":"2121d941-3524-4b71-ac16-41f4679e3525","Type":"ContainerDied","Data":"870994eb13744f6b0c3ee93aa383c552a173de072dde4a6f8ed5b6ed709f5f26"} Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.131691 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870994eb13744f6b0c3ee93aa383c552a173de072dde4a6f8ed5b6ed709f5f26" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.329653 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj"] Jan 27 14:50:56 crc kubenswrapper[4729]: E0127 14:50:56.330200 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2121d941-3524-4b71-ac16-41f4679e3525" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.330218 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2121d941-3524-4b71-ac16-41f4679e3525" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.330445 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2121d941-3524-4b71-ac16-41f4679e3525" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.331233 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.334763 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.335506 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.348295 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.348513 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.348624 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.384270 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj"] Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.423447 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.423529 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.423595 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/78f36cea-77c5-44dd-9952-6392811d2d40-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.423623 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.423774 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqgq2\" (UniqueName: \"kubernetes.io/projected/78f36cea-77c5-44dd-9952-6392811d2d40-kube-api-access-jqgq2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.526993 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqgq2\" (UniqueName: \"kubernetes.io/projected/78f36cea-77c5-44dd-9952-6392811d2d40-kube-api-access-jqgq2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.527161 4729 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.527194 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.527240 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/78f36cea-77c5-44dd-9952-6392811d2d40-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.527269 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.528644 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/78f36cea-77c5-44dd-9952-6392811d2d40-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.531349 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.531674 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.532385 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.547694 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqgq2\" (UniqueName: \"kubernetes.io/projected/78f36cea-77c5-44dd-9952-6392811d2d40-kube-api-access-jqgq2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mqwsj\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:56 crc kubenswrapper[4729]: I0127 14:50:56.655054 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:50:57 crc kubenswrapper[4729]: I0127 14:50:57.263127 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj"] Jan 27 14:50:58 crc kubenswrapper[4729]: I0127 14:50:58.123702 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:58 crc kubenswrapper[4729]: I0127 14:50:58.161895 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" event={"ID":"78f36cea-77c5-44dd-9952-6392811d2d40","Type":"ContainerStarted","Data":"55cceecf3b23dcd801166ee674921d00879dc4adfbbd20cabcbeb0f325343e23"} Jan 27 14:50:58 crc kubenswrapper[4729]: I0127 14:50:58.182465 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:50:58 crc kubenswrapper[4729]: I0127 14:50:58.370982 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66jqm"] Jan 27 14:50:59 crc kubenswrapper[4729]: I0127 14:50:59.173599 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" event={"ID":"78f36cea-77c5-44dd-9952-6392811d2d40","Type":"ContainerStarted","Data":"07fdfb596a6c0bcf850391a5b95b6738ac80f010c563af11e9a863ad7abd345f"} Jan 27 14:50:59 crc kubenswrapper[4729]: I0127 14:50:59.173845 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-66jqm" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="registry-server" containerID="cri-o://17bd91655021a3ac6c84c6bb059ee7984569584681b9d0298053be3b05a2532a" gracePeriod=2 Jan 27 14:50:59 crc kubenswrapper[4729]: I0127 14:50:59.201175 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" podStartSLOduration=2.202150391 podStartE2EDuration="3.20115091s" podCreationTimestamp="2026-01-27 14:50:56 +0000 UTC" firstStartedPulling="2026-01-27 14:50:57.281262502 +0000 UTC m=+2743.865453506" lastFinishedPulling="2026-01-27 14:50:58.280263021 +0000 UTC m=+2744.864454025" observedRunningTime="2026-01-27 14:50:59.193357196 +0000 UTC m=+2745.777548200" watchObservedRunningTime="2026-01-27 14:50:59.20115091 +0000 UTC m=+2745.785341924" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.280182 4729 generic.go:334] "Generic (PLEG): container finished" podID="2d269691-298e-4e8a-8d98-39d020545313" containerID="17bd91655021a3ac6c84c6bb059ee7984569584681b9d0298053be3b05a2532a" exitCode=0 Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.282031 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerDied","Data":"17bd91655021a3ac6c84c6bb059ee7984569584681b9d0298053be3b05a2532a"} Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.520983 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.637500 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-utilities\") pod \"2d269691-298e-4e8a-8d98-39d020545313\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.637613 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gkw7\" (UniqueName: \"kubernetes.io/projected/2d269691-298e-4e8a-8d98-39d020545313-kube-api-access-8gkw7\") pod \"2d269691-298e-4e8a-8d98-39d020545313\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.637641 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-catalog-content\") pod \"2d269691-298e-4e8a-8d98-39d020545313\" (UID: \"2d269691-298e-4e8a-8d98-39d020545313\") " Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.638572 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-utilities" (OuterVolumeSpecName: "utilities") pod "2d269691-298e-4e8a-8d98-39d020545313" (UID: "2d269691-298e-4e8a-8d98-39d020545313"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.639045 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.643100 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d269691-298e-4e8a-8d98-39d020545313-kube-api-access-8gkw7" (OuterVolumeSpecName: "kube-api-access-8gkw7") pod "2d269691-298e-4e8a-8d98-39d020545313" (UID: "2d269691-298e-4e8a-8d98-39d020545313"). InnerVolumeSpecName "kube-api-access-8gkw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.699068 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d269691-298e-4e8a-8d98-39d020545313" (UID: "2d269691-298e-4e8a-8d98-39d020545313"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.742563 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gkw7\" (UniqueName: \"kubernetes.io/projected/2d269691-298e-4e8a-8d98-39d020545313-kube-api-access-8gkw7\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:00 crc kubenswrapper[4729]: I0127 14:51:00.742624 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d269691-298e-4e8a-8d98-39d020545313-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.294950 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-66jqm" event={"ID":"2d269691-298e-4e8a-8d98-39d020545313","Type":"ContainerDied","Data":"087cd0a0b8a3342a4320cf6dc295b3917d42c00ab80b5cbe93d8f8258c80cfcb"} Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.295086 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-66jqm" Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.295510 4729 scope.go:117] "RemoveContainer" containerID="17bd91655021a3ac6c84c6bb059ee7984569584681b9d0298053be3b05a2532a" Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.323937 4729 scope.go:117] "RemoveContainer" containerID="6b640f41a00840e0ad26f05f05e3392437ef6250a4fd647eef49e7f86e6c4d8d" Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.378820 4729 scope.go:117] "RemoveContainer" containerID="09aa4965b6645361b678261a2cdfa42e06f4aa9250085be45f4a3547c9c2e096" Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.381446 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-66jqm"] Jan 27 14:51:01 crc kubenswrapper[4729]: I0127 14:51:01.411006 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-66jqm"] Jan 27 14:51:02 crc kubenswrapper[4729]: I0127 14:51:02.066800 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d269691-298e-4e8a-8d98-39d020545313" path="/var/lib/kubelet/pods/2d269691-298e-4e8a-8d98-39d020545313/volumes" Jan 27 14:52:07 crc kubenswrapper[4729]: I0127 14:52:07.028492 4729 generic.go:334] "Generic (PLEG): container finished" podID="78f36cea-77c5-44dd-9952-6392811d2d40" containerID="07fdfb596a6c0bcf850391a5b95b6738ac80f010c563af11e9a863ad7abd345f" exitCode=0 Jan 27 14:52:07 crc kubenswrapper[4729]: I0127 14:52:07.028577 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" event={"ID":"78f36cea-77c5-44dd-9952-6392811d2d40","Type":"ContainerDied","Data":"07fdfb596a6c0bcf850391a5b95b6738ac80f010c563af11e9a863ad7abd345f"} Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.663711 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.741221 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ssh-key-openstack-edpm-ipam\") pod \"78f36cea-77c5-44dd-9952-6392811d2d40\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.741278 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-inventory\") pod \"78f36cea-77c5-44dd-9952-6392811d2d40\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.741360 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqgq2\" (UniqueName: \"kubernetes.io/projected/78f36cea-77c5-44dd-9952-6392811d2d40-kube-api-access-jqgq2\") pod \"78f36cea-77c5-44dd-9952-6392811d2d40\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.741400 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/78f36cea-77c5-44dd-9952-6392811d2d40-ovncontroller-config-0\") pod \"78f36cea-77c5-44dd-9952-6392811d2d40\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.741450 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ovn-combined-ca-bundle\") pod \"78f36cea-77c5-44dd-9952-6392811d2d40\" (UID: \"78f36cea-77c5-44dd-9952-6392811d2d40\") " Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.748797 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "78f36cea-77c5-44dd-9952-6392811d2d40" (UID: "78f36cea-77c5-44dd-9952-6392811d2d40"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.759063 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f36cea-77c5-44dd-9952-6392811d2d40-kube-api-access-jqgq2" (OuterVolumeSpecName: "kube-api-access-jqgq2") pod "78f36cea-77c5-44dd-9952-6392811d2d40" (UID: "78f36cea-77c5-44dd-9952-6392811d2d40"). InnerVolumeSpecName "kube-api-access-jqgq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.777541 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78f36cea-77c5-44dd-9952-6392811d2d40-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "78f36cea-77c5-44dd-9952-6392811d2d40" (UID: "78f36cea-77c5-44dd-9952-6392811d2d40"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.782673 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-inventory" (OuterVolumeSpecName: "inventory") pod "78f36cea-77c5-44dd-9952-6392811d2d40" (UID: "78f36cea-77c5-44dd-9952-6392811d2d40"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.787102 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "78f36cea-77c5-44dd-9952-6392811d2d40" (UID: "78f36cea-77c5-44dd-9952-6392811d2d40"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.846010 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.846077 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.846089 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqgq2\" (UniqueName: \"kubernetes.io/projected/78f36cea-77c5-44dd-9952-6392811d2d40-kube-api-access-jqgq2\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.846100 4729 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/78f36cea-77c5-44dd-9952-6392811d2d40-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:08 crc kubenswrapper[4729]: I0127 14:52:08.846110 4729 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78f36cea-77c5-44dd-9952-6392811d2d40-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.052009 4729 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" event={"ID":"78f36cea-77c5-44dd-9952-6392811d2d40","Type":"ContainerDied","Data":"55cceecf3b23dcd801166ee674921d00879dc4adfbbd20cabcbeb0f325343e23"} Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.052048 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55cceecf3b23dcd801166ee674921d00879dc4adfbbd20cabcbeb0f325343e23" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.052093 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mqwsj" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.155467 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf"] Jan 27 14:52:09 crc kubenswrapper[4729]: E0127 14:52:09.156017 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f36cea-77c5-44dd-9952-6392811d2d40" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.156039 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f36cea-77c5-44dd-9952-6392811d2d40" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 14:52:09 crc kubenswrapper[4729]: E0127 14:52:09.156088 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="registry-server" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.156101 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="registry-server" Jan 27 14:52:09 crc kubenswrapper[4729]: E0127 14:52:09.156120 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="extract-utilities" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.156130 4729 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="extract-utilities" Jan 27 14:52:09 crc kubenswrapper[4729]: E0127 14:52:09.156159 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="extract-content" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.156167 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="extract-content" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.156446 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f36cea-77c5-44dd-9952-6392811d2d40" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.156491 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d269691-298e-4e8a-8d98-39d020545313" containerName="registry-server" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.157598 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.160836 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.161127 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.161316 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.161455 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.161580 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.161759 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.211349 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf"] Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.255243 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.255315 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.255382 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.255401 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.256247 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qpg\" (UniqueName: \"kubernetes.io/projected/067cab76-3d24-4a20-a016-0141d54181a2-kube-api-access-27qpg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.256379 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.360259 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.360676 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.360705 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.362139 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qpg\" 
(UniqueName: \"kubernetes.io/projected/067cab76-3d24-4a20-a016-0141d54181a2-kube-api-access-27qpg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.362188 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.362228 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.368102 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.368700 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.370514 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.379136 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.379661 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.386318 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qpg\" (UniqueName: \"kubernetes.io/projected/067cab76-3d24-4a20-a016-0141d54181a2-kube-api-access-27qpg\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:09 crc kubenswrapper[4729]: I0127 14:52:09.478824 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:52:10 crc kubenswrapper[4729]: I0127 14:52:10.105854 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf"] Jan 27 14:52:10 crc kubenswrapper[4729]: I0127 14:52:10.115814 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:52:11 crc kubenswrapper[4729]: I0127 14:52:11.078472 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" event={"ID":"067cab76-3d24-4a20-a016-0141d54181a2","Type":"ContainerStarted","Data":"468a20c9b5633fef7bab5a87b65c3e1a1295e263cfaa19df499371cb874bdd00"} Jan 27 14:52:12 crc kubenswrapper[4729]: I0127 14:52:12.093497 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" event={"ID":"067cab76-3d24-4a20-a016-0141d54181a2","Type":"ContainerStarted","Data":"8d30d1d9c0c348627796ff451a8ff5a8d7434cb935aaf969e485e8da295ef6e9"} Jan 27 14:52:12 crc kubenswrapper[4729]: I0127 14:52:12.128231 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" podStartSLOduration=2.2071113589999998 podStartE2EDuration="3.128209187s" podCreationTimestamp="2026-01-27 14:52:09 +0000 UTC" firstStartedPulling="2026-01-27 14:52:10.115493269 +0000 UTC m=+2816.699684273" lastFinishedPulling="2026-01-27 14:52:11.036591097 +0000 UTC m=+2817.620782101" observedRunningTime="2026-01-27 14:52:12.115154105 
+0000 UTC m=+2818.699345129" watchObservedRunningTime="2026-01-27 14:52:12.128209187 +0000 UTC m=+2818.712400191" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.138170 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tl2g8"] Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.148755 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.170614 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tl2g8"] Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.280766 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-utilities\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.280850 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhn7\" (UniqueName: \"kubernetes.io/projected/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-kube-api-access-hfhn7\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.281361 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-catalog-content\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.384610 4729 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-utilities\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.384713 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhn7\" (UniqueName: \"kubernetes.io/projected/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-kube-api-access-hfhn7\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.384900 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-catalog-content\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.385555 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-catalog-content\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.385840 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-utilities\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.413060 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhn7\" (UniqueName: 
\"kubernetes.io/projected/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-kube-api-access-hfhn7\") pod \"redhat-operators-tl2g8\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:19 crc kubenswrapper[4729]: I0127 14:52:19.483366 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:20 crc kubenswrapper[4729]: I0127 14:52:20.271028 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tl2g8"] Jan 27 14:52:21 crc kubenswrapper[4729]: I0127 14:52:21.210354 4729 generic.go:334] "Generic (PLEG): container finished" podID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerID="0024ea01c6480d394079232735c44ad4caa11bc1a907338c7c4cfa45e02385cd" exitCode=0 Jan 27 14:52:21 crc kubenswrapper[4729]: I0127 14:52:21.210578 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerDied","Data":"0024ea01c6480d394079232735c44ad4caa11bc1a907338c7c4cfa45e02385cd"} Jan 27 14:52:21 crc kubenswrapper[4729]: I0127 14:52:21.210662 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerStarted","Data":"dfadf9b753f89aa4c1f9ade7a73fe6694e2523a8183bd6994cdc7211aed995c6"} Jan 27 14:52:23 crc kubenswrapper[4729]: I0127 14:52:23.233079 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerStarted","Data":"faa51b2a5eb1a567a1d24df1b9d3c6006b1c2d0af3ce83309008bd5adbcaaaeb"} Jan 27 14:52:32 crc kubenswrapper[4729]: I0127 14:52:32.333066 4729 generic.go:334] "Generic (PLEG): container finished" podID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" 
containerID="faa51b2a5eb1a567a1d24df1b9d3c6006b1c2d0af3ce83309008bd5adbcaaaeb" exitCode=0 Jan 27 14:52:32 crc kubenswrapper[4729]: I0127 14:52:32.333147 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerDied","Data":"faa51b2a5eb1a567a1d24df1b9d3c6006b1c2d0af3ce83309008bd5adbcaaaeb"} Jan 27 14:52:33 crc kubenswrapper[4729]: I0127 14:52:33.345117 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerStarted","Data":"af9bea9301083cc5d1acb66f5385a69742235f4142ef4fedb7a16c947b3dc4cf"} Jan 27 14:52:33 crc kubenswrapper[4729]: I0127 14:52:33.375269 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tl2g8" podStartSLOduration=2.7408941540000002 podStartE2EDuration="14.375251098s" podCreationTimestamp="2026-01-27 14:52:19 +0000 UTC" firstStartedPulling="2026-01-27 14:52:21.213378382 +0000 UTC m=+2827.797569386" lastFinishedPulling="2026-01-27 14:52:32.847735326 +0000 UTC m=+2839.431926330" observedRunningTime="2026-01-27 14:52:33.365373042 +0000 UTC m=+2839.949564066" watchObservedRunningTime="2026-01-27 14:52:33.375251098 +0000 UTC m=+2839.959442102" Jan 27 14:52:39 crc kubenswrapper[4729]: I0127 14:52:39.484292 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:39 crc kubenswrapper[4729]: I0127 14:52:39.484935 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:52:40 crc kubenswrapper[4729]: I0127 14:52:40.533353 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" 
probeResult="failure" output=< Jan 27 14:52:40 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:52:40 crc kubenswrapper[4729]: > Jan 27 14:52:50 crc kubenswrapper[4729]: I0127 14:52:50.546561 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:52:50 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:52:50 crc kubenswrapper[4729]: > Jan 27 14:53:00 crc kubenswrapper[4729]: I0127 14:53:00.541020 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:53:00 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:53:00 crc kubenswrapper[4729]: > Jan 27 14:53:02 crc kubenswrapper[4729]: I0127 14:53:02.705337 4729 generic.go:334] "Generic (PLEG): container finished" podID="067cab76-3d24-4a20-a016-0141d54181a2" containerID="8d30d1d9c0c348627796ff451a8ff5a8d7434cb935aaf969e485e8da295ef6e9" exitCode=0 Jan 27 14:53:02 crc kubenswrapper[4729]: I0127 14:53:02.705443 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" event={"ID":"067cab76-3d24-4a20-a016-0141d54181a2","Type":"ContainerDied","Data":"8d30d1d9c0c348627796ff451a8ff5a8d7434cb935aaf969e485e8da295ef6e9"} Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.323995 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.485629 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-inventory\") pod \"067cab76-3d24-4a20-a016-0141d54181a2\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.486310 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-ssh-key-openstack-edpm-ipam\") pod \"067cab76-3d24-4a20-a016-0141d54181a2\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.486374 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-metadata-combined-ca-bundle\") pod \"067cab76-3d24-4a20-a016-0141d54181a2\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.486430 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-ovn-metadata-agent-neutron-config-0\") pod \"067cab76-3d24-4a20-a016-0141d54181a2\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.486554 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qpg\" (UniqueName: \"kubernetes.io/projected/067cab76-3d24-4a20-a016-0141d54181a2-kube-api-access-27qpg\") pod \"067cab76-3d24-4a20-a016-0141d54181a2\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " Jan 27 
14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.486641 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-nova-metadata-neutron-config-0\") pod \"067cab76-3d24-4a20-a016-0141d54181a2\" (UID: \"067cab76-3d24-4a20-a016-0141d54181a2\") " Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.495252 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067cab76-3d24-4a20-a016-0141d54181a2-kube-api-access-27qpg" (OuterVolumeSpecName: "kube-api-access-27qpg") pod "067cab76-3d24-4a20-a016-0141d54181a2" (UID: "067cab76-3d24-4a20-a016-0141d54181a2"). InnerVolumeSpecName "kube-api-access-27qpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.496853 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "067cab76-3d24-4a20-a016-0141d54181a2" (UID: "067cab76-3d24-4a20-a016-0141d54181a2"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.535133 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "067cab76-3d24-4a20-a016-0141d54181a2" (UID: "067cab76-3d24-4a20-a016-0141d54181a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.535677 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-inventory" (OuterVolumeSpecName: "inventory") pod "067cab76-3d24-4a20-a016-0141d54181a2" (UID: "067cab76-3d24-4a20-a016-0141d54181a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.536046 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "067cab76-3d24-4a20-a016-0141d54181a2" (UID: "067cab76-3d24-4a20-a016-0141d54181a2"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.536271 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "067cab76-3d24-4a20-a016-0141d54181a2" (UID: "067cab76-3d24-4a20-a016-0141d54181a2"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.591241 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.591287 4729 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.591300 4729 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.591310 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qpg\" (UniqueName: \"kubernetes.io/projected/067cab76-3d24-4a20-a016-0141d54181a2-kube-api-access-27qpg\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.591321 4729 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.591331 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/067cab76-3d24-4a20-a016-0141d54181a2-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.735349 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" event={"ID":"067cab76-3d24-4a20-a016-0141d54181a2","Type":"ContainerDied","Data":"468a20c9b5633fef7bab5a87b65c3e1a1295e263cfaa19df499371cb874bdd00"} Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.735404 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="468a20c9b5633fef7bab5a87b65c3e1a1295e263cfaa19df499371cb874bdd00" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.735422 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.898930 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw"] Jan 27 14:53:04 crc kubenswrapper[4729]: E0127 14:53:04.899471 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067cab76-3d24-4a20-a016-0141d54181a2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.899491 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="067cab76-3d24-4a20-a016-0141d54181a2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.899691 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="067cab76-3d24-4a20-a016-0141d54181a2" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.900677 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.907133 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.907201 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.907510 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.908748 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.914399 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw"] Jan 27 14:53:04 crc kubenswrapper[4729]: I0127 14:53:04.918748 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.002440 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.002801 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: 
\"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.002943 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.003255 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8jm\" (UniqueName: \"kubernetes.io/projected/33c4c74a-3a24-43e4-94ff-84a794d0db7d-kube-api-access-wp8jm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.003387 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.106162 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.106248 
4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.106475 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp8jm\" (UniqueName: \"kubernetes.io/projected/33c4c74a-3a24-43e4-94ff-84a794d0db7d-kube-api-access-wp8jm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.106906 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.107250 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.116960 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.116994 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.116960 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.117090 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.139099 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp8jm\" (UniqueName: \"kubernetes.io/projected/33c4c74a-3a24-43e4-94ff-84a794d0db7d-kube-api-access-wp8jm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.226941 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:53:05 crc kubenswrapper[4729]: I0127 14:53:05.986998 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw"] Jan 27 14:53:06 crc kubenswrapper[4729]: I0127 14:53:06.773907 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" event={"ID":"33c4c74a-3a24-43e4-94ff-84a794d0db7d","Type":"ContainerStarted","Data":"ec9914816a3ad0cac9aeb3a9c699dc433441c48f1fcddac2d7829dc9d8a6cb07"} Jan 27 14:53:07 crc kubenswrapper[4729]: I0127 14:53:07.787673 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" event={"ID":"33c4c74a-3a24-43e4-94ff-84a794d0db7d","Type":"ContainerStarted","Data":"5d5182d15ae566152e5b15dda61e6f2c9090d3fa4f65c805a76eb9854149b786"} Jan 27 14:53:07 crc kubenswrapper[4729]: I0127 14:53:07.814036 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" podStartSLOduration=3.190109565 podStartE2EDuration="3.814018021s" podCreationTimestamp="2026-01-27 14:53:04 +0000 UTC" firstStartedPulling="2026-01-27 14:53:05.959159185 +0000 UTC m=+2872.543350189" lastFinishedPulling="2026-01-27 14:53:06.583067641 +0000 UTC m=+2873.167258645" observedRunningTime="2026-01-27 14:53:07.809333276 +0000 UTC m=+2874.393524280" watchObservedRunningTime="2026-01-27 14:53:07.814018021 +0000 UTC m=+2874.398209025" Jan 27 14:53:10 crc kubenswrapper[4729]: I0127 14:53:10.543263 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:53:10 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:53:10 crc 
kubenswrapper[4729]: > Jan 27 14:53:20 crc kubenswrapper[4729]: I0127 14:53:20.563146 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:53:20 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:53:20 crc kubenswrapper[4729]: > Jan 27 14:53:22 crc kubenswrapper[4729]: I0127 14:53:22.655010 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:53:22 crc kubenswrapper[4729]: I0127 14:53:22.655331 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:53:30 crc kubenswrapper[4729]: I0127 14:53:30.562445 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:53:30 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:53:30 crc kubenswrapper[4729]: > Jan 27 14:53:40 crc kubenswrapper[4729]: I0127 14:53:40.536718 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:53:40 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:53:40 crc 
kubenswrapper[4729]: > Jan 27 14:53:50 crc kubenswrapper[4729]: I0127 14:53:50.544926 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" probeResult="failure" output=< Jan 27 14:53:50 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 14:53:50 crc kubenswrapper[4729]: > Jan 27 14:53:52 crc kubenswrapper[4729]: I0127 14:53:52.655499 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:53:52 crc kubenswrapper[4729]: I0127 14:53:52.655896 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:53:53 crc kubenswrapper[4729]: I0127 14:53:53.872288 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f5nb9"] Jan 27 14:53:53 crc kubenswrapper[4729]: I0127 14:53:53.875359 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:53 crc kubenswrapper[4729]: I0127 14:53:53.893421 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5nb9"] Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.021785 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gm99\" (UniqueName: \"kubernetes.io/projected/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-kube-api-access-2gm99\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.021901 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-utilities\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.021987 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-catalog-content\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.124479 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gm99\" (UniqueName: \"kubernetes.io/projected/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-kube-api-access-2gm99\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.124722 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-utilities\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.125079 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-catalog-content\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.125598 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-catalog-content\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.126295 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-utilities\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.158926 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gm99\" (UniqueName: \"kubernetes.io/projected/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-kube-api-access-2gm99\") pod \"community-operators-f5nb9\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.208091 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:53:54 crc kubenswrapper[4729]: I0127 14:53:54.761781 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5nb9"] Jan 27 14:53:55 crc kubenswrapper[4729]: I0127 14:53:55.376802 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerStarted","Data":"0be0b42c7eeb2595a0e77663fffef5f00395c80a3be848f49fa4cd59931c6a6b"} Jan 27 14:53:56 crc kubenswrapper[4729]: I0127 14:53:56.389558 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerStarted","Data":"aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54"} Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.401318 4729 generic.go:334] "Generic (PLEG): container finished" podID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerID="aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54" exitCode=0 Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.401375 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerDied","Data":"aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54"} Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.646915 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8k9pp"] Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.649771 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.664930 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k9pp"] Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.819116 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-utilities\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.819221 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wgv6\" (UniqueName: \"kubernetes.io/projected/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-kube-api-access-4wgv6\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.819470 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-catalog-content\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.921553 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-utilities\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.921674 4729 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4wgv6\" (UniqueName: \"kubernetes.io/projected/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-kube-api-access-4wgv6\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.921773 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-catalog-content\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.922285 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-utilities\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.922370 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-catalog-content\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.956966 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wgv6\" (UniqueName: \"kubernetes.io/projected/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-kube-api-access-4wgv6\") pod \"redhat-marketplace-8k9pp\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:57 crc kubenswrapper[4729]: I0127 14:53:57.980220 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:53:58 crc kubenswrapper[4729]: I0127 14:53:58.636802 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k9pp"] Jan 27 14:53:58 crc kubenswrapper[4729]: W0127 14:53:58.644122 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f20ae30_5f00_4fdb_a886_0c46ba1d2972.slice/crio-6a94dc9e10201b07b170cd80198187ab83cbd6da40213ff1b8d604748f7a99fe WatchSource:0}: Error finding container 6a94dc9e10201b07b170cd80198187ab83cbd6da40213ff1b8d604748f7a99fe: Status 404 returned error can't find the container with id 6a94dc9e10201b07b170cd80198187ab83cbd6da40213ff1b8d604748f7a99fe Jan 27 14:53:59 crc kubenswrapper[4729]: I0127 14:53:59.448640 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerStarted","Data":"a2d09aa8a36045ecc9324a73096ba42c99cdf49b9cf234a86dc6786639de15f8"} Jan 27 14:53:59 crc kubenswrapper[4729]: I0127 14:53:59.448969 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerStarted","Data":"6a94dc9e10201b07b170cd80198187ab83cbd6da40213ff1b8d604748f7a99fe"} Jan 27 14:53:59 crc kubenswrapper[4729]: I0127 14:53:59.537373 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:53:59 crc kubenswrapper[4729]: I0127 14:53:59.587360 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:54:00 crc kubenswrapper[4729]: I0127 14:54:00.460521 4729 generic.go:334] "Generic (PLEG): container finished" podID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" 
containerID="a2d09aa8a36045ecc9324a73096ba42c99cdf49b9cf234a86dc6786639de15f8" exitCode=0 Jan 27 14:54:00 crc kubenswrapper[4729]: I0127 14:54:00.460641 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerDied","Data":"a2d09aa8a36045ecc9324a73096ba42c99cdf49b9cf234a86dc6786639de15f8"} Jan 27 14:54:01 crc kubenswrapper[4729]: I0127 14:54:01.636084 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tl2g8"] Jan 27 14:54:01 crc kubenswrapper[4729]: I0127 14:54:01.636291 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tl2g8" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" containerID="cri-o://af9bea9301083cc5d1acb66f5385a69742235f4142ef4fedb7a16c947b3dc4cf" gracePeriod=2 Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.482982 4729 generic.go:334] "Generic (PLEG): container finished" podID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerID="af9bea9301083cc5d1acb66f5385a69742235f4142ef4fedb7a16c947b3dc4cf" exitCode=0 Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.483274 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerDied","Data":"af9bea9301083cc5d1acb66f5385a69742235f4142ef4fedb7a16c947b3dc4cf"} Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.807717 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.857579 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhn7\" (UniqueName: \"kubernetes.io/projected/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-kube-api-access-hfhn7\") pod \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.857656 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-utilities\") pod \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.857687 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-catalog-content\") pod \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\" (UID: \"5cd5723d-4634-4257-b18b-a23fb3b2f9f7\") " Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.858807 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-utilities" (OuterVolumeSpecName: "utilities") pod "5cd5723d-4634-4257-b18b-a23fb3b2f9f7" (UID: "5cd5723d-4634-4257-b18b-a23fb3b2f9f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.864389 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.872709 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-kube-api-access-hfhn7" (OuterVolumeSpecName: "kube-api-access-hfhn7") pod "5cd5723d-4634-4257-b18b-a23fb3b2f9f7" (UID: "5cd5723d-4634-4257-b18b-a23fb3b2f9f7"). InnerVolumeSpecName "kube-api-access-hfhn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:02 crc kubenswrapper[4729]: I0127 14:54:02.966327 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhn7\" (UniqueName: \"kubernetes.io/projected/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-kube-api-access-hfhn7\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.141488 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cd5723d-4634-4257-b18b-a23fb3b2f9f7" (UID: "5cd5723d-4634-4257-b18b-a23fb3b2f9f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.180559 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd5723d-4634-4257-b18b-a23fb3b2f9f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.495949 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tl2g8" event={"ID":"5cd5723d-4634-4257-b18b-a23fb3b2f9f7","Type":"ContainerDied","Data":"dfadf9b753f89aa4c1f9ade7a73fe6694e2523a8183bd6994cdc7211aed995c6"} Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.496012 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tl2g8" Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.496071 4729 scope.go:117] "RemoveContainer" containerID="af9bea9301083cc5d1acb66f5385a69742235f4142ef4fedb7a16c947b3dc4cf" Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.533458 4729 scope.go:117] "RemoveContainer" containerID="faa51b2a5eb1a567a1d24df1b9d3c6006b1c2d0af3ce83309008bd5adbcaaaeb" Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.538397 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tl2g8"] Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.552529 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tl2g8"] Jan 27 14:54:03 crc kubenswrapper[4729]: I0127 14:54:03.560199 4729 scope.go:117] "RemoveContainer" containerID="0024ea01c6480d394079232735c44ad4caa11bc1a907338c7c4cfa45e02385cd" Jan 27 14:54:04 crc kubenswrapper[4729]: I0127 14:54:04.064897 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" path="/var/lib/kubelet/pods/5cd5723d-4634-4257-b18b-a23fb3b2f9f7/volumes" Jan 27 14:54:04 crc 
kubenswrapper[4729]: I0127 14:54:04.719952 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerStarted","Data":"14a3da0c98322cc6f12137f776f7d29b41f449a1f787258228151ef436f39401"} Jan 27 14:54:04 crc kubenswrapper[4729]: I0127 14:54:04.722522 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerStarted","Data":"e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3"} Jan 27 14:54:05 crc kubenswrapper[4729]: I0127 14:54:05.735858 4729 generic.go:334] "Generic (PLEG): container finished" podID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerID="e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3" exitCode=0 Jan 27 14:54:05 crc kubenswrapper[4729]: I0127 14:54:05.735930 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerDied","Data":"e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3"} Jan 27 14:54:07 crc kubenswrapper[4729]: I0127 14:54:07.775840 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerStarted","Data":"a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf"} Jan 27 14:54:07 crc kubenswrapper[4729]: I0127 14:54:07.779447 4729 generic.go:334] "Generic (PLEG): container finished" podID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerID="14a3da0c98322cc6f12137f776f7d29b41f449a1f787258228151ef436f39401" exitCode=0 Jan 27 14:54:07 crc kubenswrapper[4729]: I0127 14:54:07.779502 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" 
event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerDied","Data":"14a3da0c98322cc6f12137f776f7d29b41f449a1f787258228151ef436f39401"} Jan 27 14:54:07 crc kubenswrapper[4729]: I0127 14:54:07.813121 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f5nb9" podStartSLOduration=5.534686814 podStartE2EDuration="14.813094013s" podCreationTimestamp="2026-01-27 14:53:53 +0000 UTC" firstStartedPulling="2026-01-27 14:53:57.405039137 +0000 UTC m=+2923.989230141" lastFinishedPulling="2026-01-27 14:54:06.683446336 +0000 UTC m=+2933.267637340" observedRunningTime="2026-01-27 14:54:07.801598284 +0000 UTC m=+2934.385789308" watchObservedRunningTime="2026-01-27 14:54:07.813094013 +0000 UTC m=+2934.397285007" Jan 27 14:54:09 crc kubenswrapper[4729]: I0127 14:54:09.807005 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerStarted","Data":"d7d8902a1aa8c396d1caa2e36fec501628eea88c6e7e0712d4478d896cda08c7"} Jan 27 14:54:09 crc kubenswrapper[4729]: I0127 14:54:09.852414 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8k9pp" podStartSLOduration=4.468468409 podStartE2EDuration="12.852388946s" podCreationTimestamp="2026-01-27 14:53:57 +0000 UTC" firstStartedPulling="2026-01-27 14:54:00.463538875 +0000 UTC m=+2927.047729879" lastFinishedPulling="2026-01-27 14:54:08.847459412 +0000 UTC m=+2935.431650416" observedRunningTime="2026-01-27 14:54:09.828806984 +0000 UTC m=+2936.412997988" watchObservedRunningTime="2026-01-27 14:54:09.852388946 +0000 UTC m=+2936.436579950" Jan 27 14:54:14 crc kubenswrapper[4729]: I0127 14:54:14.208318 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:54:14 crc kubenswrapper[4729]: I0127 14:54:14.209738 
4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:54:14 crc kubenswrapper[4729]: I0127 14:54:14.269343 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:54:14 crc kubenswrapper[4729]: I0127 14:54:14.936910 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:54:15 crc kubenswrapper[4729]: I0127 14:54:15.013278 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5nb9"] Jan 27 14:54:16 crc kubenswrapper[4729]: I0127 14:54:16.906781 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f5nb9" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="registry-server" containerID="cri-o://a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf" gracePeriod=2 Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.866617 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.922649 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gm99\" (UniqueName: \"kubernetes.io/projected/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-kube-api-access-2gm99\") pod \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.922739 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-utilities\") pod \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.922799 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-catalog-content\") pod \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\" (UID: \"a15acfa4-46b5-4a62-b983-c6f54ac4fe27\") " Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.924648 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-utilities" (OuterVolumeSpecName: "utilities") pod "a15acfa4-46b5-4a62-b983-c6f54ac4fe27" (UID: "a15acfa4-46b5-4a62-b983-c6f54ac4fe27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.930786 4729 generic.go:334] "Generic (PLEG): container finished" podID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerID="a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf" exitCode=0 Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.930832 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerDied","Data":"a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf"} Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.930862 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nb9" event={"ID":"a15acfa4-46b5-4a62-b983-c6f54ac4fe27","Type":"ContainerDied","Data":"0be0b42c7eeb2595a0e77663fffef5f00395c80a3be848f49fa4cd59931c6a6b"} Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.930955 4729 scope.go:117] "RemoveContainer" containerID="a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.931142 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5nb9" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.940305 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-kube-api-access-2gm99" (OuterVolumeSpecName: "kube-api-access-2gm99") pod "a15acfa4-46b5-4a62-b983-c6f54ac4fe27" (UID: "a15acfa4-46b5-4a62-b983-c6f54ac4fe27"). InnerVolumeSpecName "kube-api-access-2gm99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.981539 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.981604 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:54:17 crc kubenswrapper[4729]: I0127 14:54:17.997251 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a15acfa4-46b5-4a62-b983-c6f54ac4fe27" (UID: "a15acfa4-46b5-4a62-b983-c6f54ac4fe27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.008225 4729 scope.go:117] "RemoveContainer" containerID="e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.027026 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gm99\" (UniqueName: \"kubernetes.io/projected/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-kube-api-access-2gm99\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.027069 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.027083 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15acfa4-46b5-4a62-b983-c6f54ac4fe27-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.047115 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.049566 4729 scope.go:117] "RemoveContainer" containerID="aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.111005 4729 scope.go:117] "RemoveContainer" containerID="a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf" Jan 27 14:54:18 crc kubenswrapper[4729]: E0127 14:54:18.112110 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf\": container with ID starting with a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf not found: ID does not exist" containerID="a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.112171 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf"} err="failed to get container status \"a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf\": rpc error: code = NotFound desc = could not find container \"a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf\": container with ID starting with a3eacd936761a2a3f184105c12ea1f762fb0ef360594a6fa06269d6b007352bf not found: ID does not exist" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.112204 4729 scope.go:117] "RemoveContainer" containerID="e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3" Jan 27 14:54:18 crc kubenswrapper[4729]: E0127 14:54:18.112749 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3\": container with ID starting with 
e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3 not found: ID does not exist" containerID="e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.112779 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3"} err="failed to get container status \"e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3\": rpc error: code = NotFound desc = could not find container \"e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3\": container with ID starting with e1593530cbf133d1245e3b0dcdafa50a6199e5f6dce1175a7092be340deeedc3 not found: ID does not exist" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.112796 4729 scope.go:117] "RemoveContainer" containerID="aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54" Jan 27 14:54:18 crc kubenswrapper[4729]: E0127 14:54:18.113302 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54\": container with ID starting with aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54 not found: ID does not exist" containerID="aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.113973 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54"} err="failed to get container status \"aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54\": rpc error: code = NotFound desc = could not find container \"aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54\": container with ID starting with aabd3565ed798b2cfd628129644b85480a0ec71dc0580571b750899de0528f54 not found: ID does not 
exist" Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.260045 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5nb9"] Jan 27 14:54:18 crc kubenswrapper[4729]: I0127 14:54:18.270790 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f5nb9"] Jan 27 14:54:19 crc kubenswrapper[4729]: I0127 14:54:19.017955 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:54:20 crc kubenswrapper[4729]: I0127 14:54:20.069579 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" path="/var/lib/kubelet/pods/a15acfa4-46b5-4a62-b983-c6f54ac4fe27/volumes" Jan 27 14:54:21 crc kubenswrapper[4729]: I0127 14:54:21.445543 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k9pp"] Jan 27 14:54:21 crc kubenswrapper[4729]: I0127 14:54:21.446189 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8k9pp" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="registry-server" containerID="cri-o://d7d8902a1aa8c396d1caa2e36fec501628eea88c6e7e0712d4478d896cda08c7" gracePeriod=2 Jan 27 14:54:21 crc kubenswrapper[4729]: I0127 14:54:21.991670 4729 generic.go:334] "Generic (PLEG): container finished" podID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerID="d7d8902a1aa8c396d1caa2e36fec501628eea88c6e7e0712d4478d896cda08c7" exitCode=0 Jan 27 14:54:21 crc kubenswrapper[4729]: I0127 14:54:21.991735 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerDied","Data":"d7d8902a1aa8c396d1caa2e36fec501628eea88c6e7e0712d4478d896cda08c7"} Jan 27 14:54:21 crc kubenswrapper[4729]: I0127 14:54:21.992019 4729 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k9pp" event={"ID":"6f20ae30-5f00-4fdb-a886-0c46ba1d2972","Type":"ContainerDied","Data":"6a94dc9e10201b07b170cd80198187ab83cbd6da40213ff1b8d604748f7a99fe"} Jan 27 14:54:21 crc kubenswrapper[4729]: I0127 14:54:21.992036 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a94dc9e10201b07b170cd80198187ab83cbd6da40213ff1b8d604748f7a99fe" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.106029 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.261407 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wgv6\" (UniqueName: \"kubernetes.io/projected/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-kube-api-access-4wgv6\") pod \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.261648 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-utilities\") pod \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.261677 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-catalog-content\") pod \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\" (UID: \"6f20ae30-5f00-4fdb-a886-0c46ba1d2972\") " Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.263534 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-utilities" (OuterVolumeSpecName: "utilities") pod 
"6f20ae30-5f00-4fdb-a886-0c46ba1d2972" (UID: "6f20ae30-5f00-4fdb-a886-0c46ba1d2972"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.267378 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-kube-api-access-4wgv6" (OuterVolumeSpecName: "kube-api-access-4wgv6") pod "6f20ae30-5f00-4fdb-a886-0c46ba1d2972" (UID: "6f20ae30-5f00-4fdb-a886-0c46ba1d2972"). InnerVolumeSpecName "kube-api-access-4wgv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.289102 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f20ae30-5f00-4fdb-a886-0c46ba1d2972" (UID: "6f20ae30-5f00-4fdb-a886-0c46ba1d2972"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.365099 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wgv6\" (UniqueName: \"kubernetes.io/projected/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-kube-api-access-4wgv6\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.365150 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.365164 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f20ae30-5f00-4fdb-a886-0c46ba1d2972-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.655258 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.655539 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.655595 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.656896 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 14:54:22 crc kubenswrapper[4729]: I0127 14:54:22.656975 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" gracePeriod=600 Jan 27 14:54:22 crc kubenswrapper[4729]: E0127 14:54:22.787980 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.007721 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" exitCode=0 Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.007828 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k9pp" Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.007858 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced"} Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.007949 4729 scope.go:117] "RemoveContainer" containerID="bdfc1d43aba260a0c451a6e8de022510c8b3bb245d8c6242f69918d2f5967b4d" Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.009305 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:54:23 crc kubenswrapper[4729]: E0127 14:54:23.009799 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.066161 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k9pp"] Jan 27 14:54:23 crc kubenswrapper[4729]: I0127 14:54:23.077641 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k9pp"] Jan 27 14:54:24 crc kubenswrapper[4729]: I0127 14:54:24.064456 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" path="/var/lib/kubelet/pods/6f20ae30-5f00-4fdb-a886-0c46ba1d2972/volumes" Jan 27 14:54:38 crc kubenswrapper[4729]: I0127 14:54:38.052064 4729 scope.go:117] "RemoveContainer" 
containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:54:38 crc kubenswrapper[4729]: E0127 14:54:38.052945 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:54:51 crc kubenswrapper[4729]: I0127 14:54:51.051437 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:54:51 crc kubenswrapper[4729]: E0127 14:54:51.052356 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:55:05 crc kubenswrapper[4729]: I0127 14:55:05.053790 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:55:05 crc kubenswrapper[4729]: E0127 14:55:05.056198 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:55:16 crc kubenswrapper[4729]: I0127 14:55:16.051830 4729 scope.go:117] 
"RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:55:16 crc kubenswrapper[4729]: E0127 14:55:16.052744 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:55:30 crc kubenswrapper[4729]: I0127 14:55:30.053539 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:55:30 crc kubenswrapper[4729]: E0127 14:55:30.055327 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:55:44 crc kubenswrapper[4729]: I0127 14:55:44.070565 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:55:44 crc kubenswrapper[4729]: E0127 14:55:44.073926 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:55:57 crc kubenswrapper[4729]: I0127 14:55:57.051611 
4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:55:57 crc kubenswrapper[4729]: E0127 14:55:57.052599 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:56:10 crc kubenswrapper[4729]: I0127 14:56:10.052108 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:56:10 crc kubenswrapper[4729]: E0127 14:56:10.053182 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:56:21 crc kubenswrapper[4729]: I0127 14:56:21.051382 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:56:21 crc kubenswrapper[4729]: E0127 14:56:21.052433 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:56:34 crc kubenswrapper[4729]: I0127 
14:56:34.059444 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:56:34 crc kubenswrapper[4729]: E0127 14:56:34.060379 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:56:49 crc kubenswrapper[4729]: I0127 14:56:49.051787 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:56:49 crc kubenswrapper[4729]: E0127 14:56:49.054043 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:57:01 crc kubenswrapper[4729]: I0127 14:57:01.050984 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:57:01 crc kubenswrapper[4729]: E0127 14:57:01.051813 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:57:14 crc 
kubenswrapper[4729]: I0127 14:57:14.060172 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:57:14 crc kubenswrapper[4729]: E0127 14:57:14.061208 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:57:26 crc kubenswrapper[4729]: I0127 14:57:26.056949 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:57:26 crc kubenswrapper[4729]: E0127 14:57:26.057845 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:57:30 crc kubenswrapper[4729]: I0127 14:57:30.432999 4729 generic.go:334] "Generic (PLEG): container finished" podID="33c4c74a-3a24-43e4-94ff-84a794d0db7d" containerID="5d5182d15ae566152e5b15dda61e6f2c9090d3fa4f65c805a76eb9854149b786" exitCode=0 Jan 27 14:57:30 crc kubenswrapper[4729]: I0127 14:57:30.433113 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" event={"ID":"33c4c74a-3a24-43e4-94ff-84a794d0db7d","Type":"ContainerDied","Data":"5d5182d15ae566152e5b15dda61e6f2c9090d3fa4f65c805a76eb9854149b786"} Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.004898 4729 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.105534 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp8jm\" (UniqueName: \"kubernetes.io/projected/33c4c74a-3a24-43e4-94ff-84a794d0db7d-kube-api-access-wp8jm\") pod \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.105655 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-ssh-key-openstack-edpm-ipam\") pod \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.105913 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-inventory\") pod \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.105958 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-secret-0\") pod \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.106020 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-combined-ca-bundle\") pod \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\" (UID: \"33c4c74a-3a24-43e4-94ff-84a794d0db7d\") " Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 
14:57:32.114720 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33c4c74a-3a24-43e4-94ff-84a794d0db7d-kube-api-access-wp8jm" (OuterVolumeSpecName: "kube-api-access-wp8jm") pod "33c4c74a-3a24-43e4-94ff-84a794d0db7d" (UID: "33c4c74a-3a24-43e4-94ff-84a794d0db7d"). InnerVolumeSpecName "kube-api-access-wp8jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.118213 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "33c4c74a-3a24-43e4-94ff-84a794d0db7d" (UID: "33c4c74a-3a24-43e4-94ff-84a794d0db7d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.143777 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "33c4c74a-3a24-43e4-94ff-84a794d0db7d" (UID: "33c4c74a-3a24-43e4-94ff-84a794d0db7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.144264 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "33c4c74a-3a24-43e4-94ff-84a794d0db7d" (UID: "33c4c74a-3a24-43e4-94ff-84a794d0db7d"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.149201 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-inventory" (OuterVolumeSpecName: "inventory") pod "33c4c74a-3a24-43e4-94ff-84a794d0db7d" (UID: "33c4c74a-3a24-43e4-94ff-84a794d0db7d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.209501 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp8jm\" (UniqueName: \"kubernetes.io/projected/33c4c74a-3a24-43e4-94ff-84a794d0db7d-kube-api-access-wp8jm\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.209537 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.209549 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.209563 4729 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.209576 4729 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c4c74a-3a24-43e4-94ff-84a794d0db7d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.455601 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" event={"ID":"33c4c74a-3a24-43e4-94ff-84a794d0db7d","Type":"ContainerDied","Data":"ec9914816a3ad0cac9aeb3a9c699dc433441c48f1fcddac2d7829dc9d8a6cb07"} Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.455643 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec9914816a3ad0cac9aeb3a9c699dc433441c48f1fcddac2d7829dc9d8a6cb07" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.455750 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590236 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk"] Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.590804 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="extract-utilities" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590831 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="extract-utilities" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.590853 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="extract-content" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590862 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="extract-content" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.590896 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="extract-utilities" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590905 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" 
containerName="extract-utilities" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.590924 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33c4c74a-3a24-43e4-94ff-84a794d0db7d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590933 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="33c4c74a-3a24-43e4-94ff-84a794d0db7d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.590944 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="extract-content" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590953 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="extract-content" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.590973 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="extract-content" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.590980 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="extract-content" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.591000 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="extract-utilities" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591006 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="extract-utilities" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.591037 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591045 4729 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.591064 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591071 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: E0127 14:57:32.591090 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591096 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591365 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd5723d-4634-4257-b18b-a23fb3b2f9f7" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591395 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="33c4c74a-3a24-43e4-94ff-84a794d0db7d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591416 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15acfa4-46b5-4a62-b983-c6f54ac4fe27" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.591434 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f20ae30-5f00-4fdb-a886-0c46ba1d2972" containerName="registry-server" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.592490 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.599591 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.599678 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.599840 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.600006 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.599855 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.600712 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.601227 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.608414 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk"] Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730368 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: 
I0127 14:57:32.730441 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730594 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730684 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730749 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730771 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-525sd\" (UniqueName: 
\"kubernetes.io/projected/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-kube-api-access-525sd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730861 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.730997 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.731051 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833274 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: 
\"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833353 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833465 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833595 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833665 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833689 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-525sd\" (UniqueName: \"kubernetes.io/projected/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-kube-api-access-525sd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833735 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833804 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.833845 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.834778 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.837598 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.838519 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.839397 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.839574 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.841131 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.847499 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.853921 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.860834 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-525sd\" (UniqueName: \"kubernetes.io/projected/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-kube-api-access-525sd\") pod \"nova-edpm-deployment-openstack-edpm-ipam-skhnk\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:32 crc kubenswrapper[4729]: I0127 14:57:32.914594 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 14:57:33 crc kubenswrapper[4729]: I0127 14:57:33.524999 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk"] Jan 27 14:57:33 crc kubenswrapper[4729]: I0127 14:57:33.531453 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 14:57:34 crc kubenswrapper[4729]: I0127 14:57:34.484502 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" event={"ID":"d4f6bdbc-1305-4c66-8d8c-a3425163fd27","Type":"ContainerStarted","Data":"9001cdfbe98dd55b0fe05cf27a8d2730f3df70c3c2f73da66dda573ac7703644"} Jan 27 14:57:34 crc kubenswrapper[4729]: I0127 14:57:34.484844 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" event={"ID":"d4f6bdbc-1305-4c66-8d8c-a3425163fd27","Type":"ContainerStarted","Data":"78f5dc645ef7b004f9b3f257acbd809efdee4e4c8f86f3c65dcf18166f6c007a"} Jan 27 14:57:34 crc kubenswrapper[4729]: I0127 14:57:34.506746 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" podStartSLOduration=1.948479692 podStartE2EDuration="2.506718837s" podCreationTimestamp="2026-01-27 14:57:32 +0000 UTC" firstStartedPulling="2026-01-27 14:57:33.527113244 +0000 UTC m=+3140.111304248" lastFinishedPulling="2026-01-27 14:57:34.085352389 +0000 UTC m=+3140.669543393" observedRunningTime="2026-01-27 14:57:34.504043776 +0000 UTC m=+3141.088234800" watchObservedRunningTime="2026-01-27 14:57:34.506718837 +0000 UTC m=+3141.090909861" Jan 27 14:57:41 crc kubenswrapper[4729]: I0127 14:57:41.051546 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:57:41 crc kubenswrapper[4729]: E0127 14:57:41.052553 4729 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:57:56 crc kubenswrapper[4729]: I0127 14:57:56.056152 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:57:56 crc kubenswrapper[4729]: E0127 14:57:56.057105 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:58:09 crc kubenswrapper[4729]: I0127 14:58:09.051384 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:58:09 crc kubenswrapper[4729]: E0127 14:58:09.052348 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:58:21 crc kubenswrapper[4729]: I0127 14:58:21.051137 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:58:21 crc kubenswrapper[4729]: E0127 
14:58:21.052094 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:58:34 crc kubenswrapper[4729]: I0127 14:58:34.058833 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:58:34 crc kubenswrapper[4729]: E0127 14:58:34.060001 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:58:49 crc kubenswrapper[4729]: I0127 14:58:49.051849 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:58:49 crc kubenswrapper[4729]: E0127 14:58:49.052825 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:59:00 crc kubenswrapper[4729]: I0127 14:59:00.053052 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:59:00 crc 
kubenswrapper[4729]: E0127 14:59:00.053721 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:59:15 crc kubenswrapper[4729]: I0127 14:59:15.051496 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:59:15 crc kubenswrapper[4729]: E0127 14:59:15.052717 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 14:59:30 crc kubenswrapper[4729]: I0127 14:59:30.052420 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 14:59:31 crc kubenswrapper[4729]: I0127 14:59:31.214087 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"1168bac9b9c982642a62e9bd5008c45e1f4dd214d09d026196c14caf61da1354"} Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.154437 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh"] Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.157305 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.160006 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.160367 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.168516 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh"] Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.271589 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757lm\" (UniqueName: \"kubernetes.io/projected/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-kube-api-access-757lm\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.271989 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-config-volume\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.272166 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-secret-volume\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.377899 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757lm\" (UniqueName: \"kubernetes.io/projected/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-kube-api-access-757lm\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.378180 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-config-volume\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.378232 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-secret-volume\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.379487 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-config-volume\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.386228 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-secret-volume\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.398751 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757lm\" (UniqueName: \"kubernetes.io/projected/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-kube-api-access-757lm\") pod \"collect-profiles-29492100-g9xxh\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:00 crc kubenswrapper[4729]: I0127 15:00:00.508271 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:01 crc kubenswrapper[4729]: I0127 15:00:01.037057 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh"] Jan 27 15:00:01 crc kubenswrapper[4729]: I0127 15:00:01.592718 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" event={"ID":"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b","Type":"ContainerStarted","Data":"53d92d5287bd09d84f67cb62e3b754f06541eeb7a6a47271c86791a8f532d122"} Jan 27 15:00:01 crc kubenswrapper[4729]: I0127 15:00:01.594271 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" event={"ID":"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b","Type":"ContainerStarted","Data":"0e11e05685fdb5d4a55edb847d9e7a042f6f1fef374c8b8e849851fa90990a0e"} Jan 27 15:00:02 crc kubenswrapper[4729]: I0127 15:00:02.606183 4729 generic.go:334] "Generic (PLEG): container finished" podID="bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" 
containerID="53d92d5287bd09d84f67cb62e3b754f06541eeb7a6a47271c86791a8f532d122" exitCode=0 Jan 27 15:00:02 crc kubenswrapper[4729]: I0127 15:00:02.606224 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" event={"ID":"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b","Type":"ContainerDied","Data":"53d92d5287bd09d84f67cb62e3b754f06541eeb7a6a47271c86791a8f532d122"} Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.138372 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.302580 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-757lm\" (UniqueName: \"kubernetes.io/projected/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-kube-api-access-757lm\") pod \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.303048 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-config-volume\") pod \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.303156 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-secret-volume\") pod \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\" (UID: \"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b\") " Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.303747 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" (UID: "bdeeb7ac-7c92-41c9-b09c-aaaca507e15b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.304467 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.316782 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" (UID: "bdeeb7ac-7c92-41c9-b09c-aaaca507e15b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.323145 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-kube-api-access-757lm" (OuterVolumeSpecName: "kube-api-access-757lm") pod "bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" (UID: "bdeeb7ac-7c92-41c9-b09c-aaaca507e15b"). InnerVolumeSpecName "kube-api-access-757lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.406463 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.406498 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-757lm\" (UniqueName: \"kubernetes.io/projected/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b-kube-api-access-757lm\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.669323 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" event={"ID":"bdeeb7ac-7c92-41c9-b09c-aaaca507e15b","Type":"ContainerDied","Data":"0e11e05685fdb5d4a55edb847d9e7a042f6f1fef374c8b8e849851fa90990a0e"} Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.669373 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e11e05685fdb5d4a55edb847d9e7a042f6f1fef374c8b8e849851fa90990a0e" Jan 27 15:00:04 crc kubenswrapper[4729]: I0127 15:00:04.669439 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh" Jan 27 15:00:05 crc kubenswrapper[4729]: I0127 15:00:05.231858 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt"] Jan 27 15:00:05 crc kubenswrapper[4729]: I0127 15:00:05.243585 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492055-4fgmt"] Jan 27 15:00:06 crc kubenswrapper[4729]: I0127 15:00:06.078845 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797af56f-5ea4-435e-b09b-0b0901afb74e" path="/var/lib/kubelet/pods/797af56f-5ea4-435e-b09b-0b0901afb74e/volumes" Jan 27 15:00:11 crc kubenswrapper[4729]: I0127 15:00:11.744926 4729 generic.go:334] "Generic (PLEG): container finished" podID="d4f6bdbc-1305-4c66-8d8c-a3425163fd27" containerID="9001cdfbe98dd55b0fe05cf27a8d2730f3df70c3c2f73da66dda573ac7703644" exitCode=0 Jan 27 15:00:11 crc kubenswrapper[4729]: I0127 15:00:11.745048 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" event={"ID":"d4f6bdbc-1305-4c66-8d8c-a3425163fd27","Type":"ContainerDied","Data":"9001cdfbe98dd55b0fe05cf27a8d2730f3df70c3c2f73da66dda573ac7703644"} Jan 27 15:00:12 crc kubenswrapper[4729]: I0127 15:00:12.715857 4729 scope.go:117] "RemoveContainer" containerID="a2d09aa8a36045ecc9324a73096ba42c99cdf49b9cf234a86dc6786639de15f8" Jan 27 15:00:12 crc kubenswrapper[4729]: I0127 15:00:12.741231 4729 scope.go:117] "RemoveContainer" containerID="d7d8902a1aa8c396d1caa2e36fec501628eea88c6e7e0712d4478d896cda08c7" Jan 27 15:00:12 crc kubenswrapper[4729]: I0127 15:00:12.830111 4729 scope.go:117] "RemoveContainer" containerID="14a3da0c98322cc6f12137f776f7d29b41f449a1f787258228151ef436f39401" Jan 27 15:00:12 crc kubenswrapper[4729]: I0127 15:00:12.877103 4729 scope.go:117] "RemoveContainer" 
containerID="a278762361dab3dec665068a6b7f5653067248dab716ef9eadbf97a7eae07fd6" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.315487 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.366971 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-inventory\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367116 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-extra-config-0\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367366 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-525sd\" (UniqueName: \"kubernetes.io/projected/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-kube-api-access-525sd\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367474 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-1\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367530 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-0\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367557 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-combined-ca-bundle\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367578 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-0\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367601 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-ssh-key-openstack-edpm-ipam\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.367619 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-1\") pod \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\" (UID: \"d4f6bdbc-1305-4c66-8d8c-a3425163fd27\") " Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.374337 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" 
(UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.375358 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-kube-api-access-525sd" (OuterVolumeSpecName: "kube-api-access-525sd") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "kube-api-access-525sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.405247 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.407205 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-inventory" (OuterVolumeSpecName: "inventory") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.429456 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.431021 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.432668 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.439389 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.439966 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d4f6bdbc-1305-4c66-8d8c-a3425163fd27" (UID: "d4f6bdbc-1305-4c66-8d8c-a3425163fd27"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.472355 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-525sd\" (UniqueName: \"kubernetes.io/projected/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-kube-api-access-525sd\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.472741 4729 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.472948 4729 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.472957 4729 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.473565 4729 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.473582 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.473592 4729 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.473601 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.473611 4729 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4f6bdbc-1305-4c66-8d8c-a3425163fd27-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.773968 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" event={"ID":"d4f6bdbc-1305-4c66-8d8c-a3425163fd27","Type":"ContainerDied","Data":"78f5dc645ef7b004f9b3f257acbd809efdee4e4c8f86f3c65dcf18166f6c007a"} Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.774021 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f5dc645ef7b004f9b3f257acbd809efdee4e4c8f86f3c65dcf18166f6c007a" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.774184 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-skhnk" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.965184 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp"] Jan 27 15:00:13 crc kubenswrapper[4729]: E0127 15:00:13.965711 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" containerName="collect-profiles" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.965723 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" containerName="collect-profiles" Jan 27 15:00:13 crc kubenswrapper[4729]: E0127 15:00:13.965746 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f6bdbc-1305-4c66-8d8c-a3425163fd27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.965752 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f6bdbc-1305-4c66-8d8c-a3425163fd27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.966045 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f6bdbc-1305-4c66-8d8c-a3425163fd27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.966061 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" containerName="collect-profiles" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.967006 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.969532 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.969850 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.970129 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.970971 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.984471 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.991069 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.991107 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.991134 4729 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.991964 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.992113 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.992553 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:13 crc kubenswrapper[4729]: I0127 15:00:13.992688 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxb8h\" (UniqueName: \"kubernetes.io/projected/5639c133-4cde-40dc-a7f3-e716aaab5ca8-kube-api-access-vxb8h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.007606 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp"] Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095062 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095159 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxb8h\" (UniqueName: \"kubernetes.io/projected/5639c133-4cde-40dc-a7f3-e716aaab5ca8-kube-api-access-vxb8h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095247 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095275 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095310 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095348 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.095396 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.098386 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 
15:00:14.099612 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.099794 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.109181 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.111933 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.112475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.113794 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: 
\"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.122388 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.123662 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxb8h\" (UniqueName: \"kubernetes.io/projected/5639c133-4cde-40dc-a7f3-e716aaab5ca8-kube-api-access-vxb8h\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.125985 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.287303 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.296000 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:00:14 crc kubenswrapper[4729]: I0127 15:00:14.894658 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp"] Jan 27 15:00:15 crc kubenswrapper[4729]: I0127 15:00:15.489953 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:00:15 crc kubenswrapper[4729]: I0127 15:00:15.798190 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" event={"ID":"5639c133-4cde-40dc-a7f3-e716aaab5ca8","Type":"ContainerStarted","Data":"db94ae183dceda96fb651b01aa9ed12c95de44072d6d56634dfaeee4e8e77984"} Jan 27 15:00:16 crc kubenswrapper[4729]: I0127 15:00:16.810820 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" event={"ID":"5639c133-4cde-40dc-a7f3-e716aaab5ca8","Type":"ContainerStarted","Data":"0a1141430cdc338e0eeb723730a9925f6953148b3752d4003219b820b18a663d"} Jan 27 15:00:16 crc kubenswrapper[4729]: I0127 15:00:16.846498 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" podStartSLOduration=3.261551554 podStartE2EDuration="3.846482321s" podCreationTimestamp="2026-01-27 15:00:13 +0000 UTC" firstStartedPulling="2026-01-27 15:00:14.901771304 +0000 UTC m=+3301.485962308" lastFinishedPulling="2026-01-27 15:00:15.486702071 +0000 UTC m=+3302.070893075" observedRunningTime="2026-01-27 15:00:16.842747534 +0000 UTC m=+3303.426938538" watchObservedRunningTime="2026-01-27 15:00:16.846482321 +0000 UTC m=+3303.430673315" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.167805 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492101-z8cpg"] Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 
15:01:00.170561 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.182224 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492101-z8cpg"] Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.319010 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flnp4\" (UniqueName: \"kubernetes.io/projected/1c34a1bb-cfec-4b86-af1a-b633dd398427-kube-api-access-flnp4\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.319444 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-combined-ca-bundle\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.319534 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-config-data\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.319643 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-fernet-keys\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.422347 
4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flnp4\" (UniqueName: \"kubernetes.io/projected/1c34a1bb-cfec-4b86-af1a-b633dd398427-kube-api-access-flnp4\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.422447 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-combined-ca-bundle\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.422518 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-config-data\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.422609 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-fernet-keys\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.435912 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-combined-ca-bundle\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.436038 4729 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-fernet-keys\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.436248 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-config-data\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.442662 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flnp4\" (UniqueName: \"kubernetes.io/projected/1c34a1bb-cfec-4b86-af1a-b633dd398427-kube-api-access-flnp4\") pod \"keystone-cron-29492101-z8cpg\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:00 crc kubenswrapper[4729]: I0127 15:01:00.511644 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:01 crc kubenswrapper[4729]: I0127 15:01:01.085822 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492101-z8cpg"] Jan 27 15:01:01 crc kubenswrapper[4729]: I0127 15:01:01.339967 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-z8cpg" event={"ID":"1c34a1bb-cfec-4b86-af1a-b633dd398427","Type":"ContainerStarted","Data":"8751f76405a0da61af1c57182daa57aa6b60c3850461ec6e6d4fae5c3e60af60"} Jan 27 15:01:02 crc kubenswrapper[4729]: I0127 15:01:02.361000 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-z8cpg" event={"ID":"1c34a1bb-cfec-4b86-af1a-b633dd398427","Type":"ContainerStarted","Data":"7c56d4b3be2896f097c7b05adecc84219f88322eeb47e848cb2230c7f05cb63e"} Jan 27 15:01:02 crc kubenswrapper[4729]: I0127 15:01:02.385549 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492101-z8cpg" podStartSLOduration=2.385524126 podStartE2EDuration="2.385524126s" podCreationTimestamp="2026-01-27 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:01:02.385316201 +0000 UTC m=+3348.969507225" watchObservedRunningTime="2026-01-27 15:01:02.385524126 +0000 UTC m=+3348.969715130" Jan 27 15:01:10 crc kubenswrapper[4729]: I0127 15:01:10.478738 4729 generic.go:334] "Generic (PLEG): container finished" podID="1c34a1bb-cfec-4b86-af1a-b633dd398427" containerID="7c56d4b3be2896f097c7b05adecc84219f88322eeb47e848cb2230c7f05cb63e" exitCode=0 Jan 27 15:01:10 crc kubenswrapper[4729]: I0127 15:01:10.478843 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-z8cpg" 
event={"ID":"1c34a1bb-cfec-4b86-af1a-b633dd398427","Type":"ContainerDied","Data":"7c56d4b3be2896f097c7b05adecc84219f88322eeb47e848cb2230c7f05cb63e"} Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.011106 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.168911 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flnp4\" (UniqueName: \"kubernetes.io/projected/1c34a1bb-cfec-4b86-af1a-b633dd398427-kube-api-access-flnp4\") pod \"1c34a1bb-cfec-4b86-af1a-b633dd398427\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.169429 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-combined-ca-bundle\") pod \"1c34a1bb-cfec-4b86-af1a-b633dd398427\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.169481 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-fernet-keys\") pod \"1c34a1bb-cfec-4b86-af1a-b633dd398427\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.169650 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-config-data\") pod \"1c34a1bb-cfec-4b86-af1a-b633dd398427\" (UID: \"1c34a1bb-cfec-4b86-af1a-b633dd398427\") " Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.176401 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-fernet-keys" (OuterVolumeSpecName: 
"fernet-keys") pod "1c34a1bb-cfec-4b86-af1a-b633dd398427" (UID: "1c34a1bb-cfec-4b86-af1a-b633dd398427"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.178030 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c34a1bb-cfec-4b86-af1a-b633dd398427-kube-api-access-flnp4" (OuterVolumeSpecName: "kube-api-access-flnp4") pod "1c34a1bb-cfec-4b86-af1a-b633dd398427" (UID: "1c34a1bb-cfec-4b86-af1a-b633dd398427"). InnerVolumeSpecName "kube-api-access-flnp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.217545 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c34a1bb-cfec-4b86-af1a-b633dd398427" (UID: "1c34a1bb-cfec-4b86-af1a-b633dd398427"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.244657 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-config-data" (OuterVolumeSpecName: "config-data") pod "1c34a1bb-cfec-4b86-af1a-b633dd398427" (UID: "1c34a1bb-cfec-4b86-af1a-b633dd398427"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.273108 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.273454 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flnp4\" (UniqueName: \"kubernetes.io/projected/1c34a1bb-cfec-4b86-af1a-b633dd398427-kube-api-access-flnp4\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.273544 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.273635 4729 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1c34a1bb-cfec-4b86-af1a-b633dd398427-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.501869 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492101-z8cpg" event={"ID":"1c34a1bb-cfec-4b86-af1a-b633dd398427","Type":"ContainerDied","Data":"8751f76405a0da61af1c57182daa57aa6b60c3850461ec6e6d4fae5c3e60af60"} Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.501933 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8751f76405a0da61af1c57182daa57aa6b60c3850461ec6e6d4fae5c3e60af60" Jan 27 15:01:12 crc kubenswrapper[4729]: I0127 15:01:12.502193 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492101-z8cpg" Jan 27 15:01:52 crc kubenswrapper[4729]: I0127 15:01:52.655530 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:01:52 crc kubenswrapper[4729]: I0127 15:01:52.656175 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:02:22 crc kubenswrapper[4729]: I0127 15:02:22.655387 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:02:22 crc kubenswrapper[4729]: I0127 15:02:22.655999 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:02:52 crc kubenswrapper[4729]: I0127 15:02:52.655464 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:02:52 crc kubenswrapper[4729]: I0127 15:02:52.655930 4729 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:02:52 crc kubenswrapper[4729]: I0127 15:02:52.655973 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:02:52 crc kubenswrapper[4729]: I0127 15:02:52.656895 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1168bac9b9c982642a62e9bd5008c45e1f4dd214d09d026196c14caf61da1354"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:02:52 crc kubenswrapper[4729]: I0127 15:02:52.656941 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://1168bac9b9c982642a62e9bd5008c45e1f4dd214d09d026196c14caf61da1354" gracePeriod=600 Jan 27 15:02:53 crc kubenswrapper[4729]: I0127 15:02:53.697368 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="1168bac9b9c982642a62e9bd5008c45e1f4dd214d09d026196c14caf61da1354" exitCode=0 Jan 27 15:02:53 crc kubenswrapper[4729]: I0127 15:02:53.697835 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"1168bac9b9c982642a62e9bd5008c45e1f4dd214d09d026196c14caf61da1354"} Jan 27 15:02:53 crc kubenswrapper[4729]: I0127 
15:02:53.698016 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b"} Jan 27 15:02:53 crc kubenswrapper[4729]: I0127 15:02:53.698037 4729 scope.go:117] "RemoveContainer" containerID="bb5d9d200aba179a0bcadce02af20edb94a6e06174eacbe2b49f806e5dcd2ced" Jan 27 15:03:04 crc kubenswrapper[4729]: I0127 15:03:04.828814 4729 generic.go:334] "Generic (PLEG): container finished" podID="5639c133-4cde-40dc-a7f3-e716aaab5ca8" containerID="0a1141430cdc338e0eeb723730a9925f6953148b3752d4003219b820b18a663d" exitCode=0 Jan 27 15:03:04 crc kubenswrapper[4729]: I0127 15:03:04.829433 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" event={"ID":"5639c133-4cde-40dc-a7f3-e716aaab5ca8","Type":"ContainerDied","Data":"0a1141430cdc338e0eeb723730a9925f6953148b3752d4003219b820b18a663d"} Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.411037 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.478506 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxb8h\" (UniqueName: \"kubernetes.io/projected/5639c133-4cde-40dc-a7f3-e716aaab5ca8-kube-api-access-vxb8h\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.479087 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-2\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.479240 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ssh-key-openstack-edpm-ipam\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.480213 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-0\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.480748 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-telemetry-combined-ca-bundle\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 
15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.481223 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-1\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.481378 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-inventory\") pod \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\" (UID: \"5639c133-4cde-40dc-a7f3-e716aaab5ca8\") " Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.485265 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5639c133-4cde-40dc-a7f3-e716aaab5ca8-kube-api-access-vxb8h" (OuterVolumeSpecName: "kube-api-access-vxb8h") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "kube-api-access-vxb8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.486820 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.513509 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.517826 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.530842 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.535013 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.539957 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-inventory" (OuterVolumeSpecName: "inventory") pod "5639c133-4cde-40dc-a7f3-e716aaab5ca8" (UID: "5639c133-4cde-40dc-a7f3-e716aaab5ca8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589654 4729 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589696 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589709 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589719 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxb8h\" (UniqueName: \"kubernetes.io/projected/5639c133-4cde-40dc-a7f3-e716aaab5ca8-kube-api-access-vxb8h\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589727 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589736 4729 reconciler_common.go:293] 
"Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.589745 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5639c133-4cde-40dc-a7f3-e716aaab5ca8-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.855109 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" event={"ID":"5639c133-4cde-40dc-a7f3-e716aaab5ca8","Type":"ContainerDied","Data":"db94ae183dceda96fb651b01aa9ed12c95de44072d6d56634dfaeee4e8e77984"} Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.855189 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db94ae183dceda96fb651b01aa9ed12c95de44072d6d56634dfaeee4e8e77984" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.855237 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.986008 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64"] Jan 27 15:03:06 crc kubenswrapper[4729]: E0127 15:03:06.986672 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5639c133-4cde-40dc-a7f3-e716aaab5ca8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.986706 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="5639c133-4cde-40dc-a7f3-e716aaab5ca8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 15:03:06 crc kubenswrapper[4729]: E0127 15:03:06.986728 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c34a1bb-cfec-4b86-af1a-b633dd398427" containerName="keystone-cron" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.986761 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c34a1bb-cfec-4b86-af1a-b633dd398427" containerName="keystone-cron" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.987066 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="5639c133-4cde-40dc-a7f3-e716aaab5ca8" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.987112 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c34a1bb-cfec-4b86-af1a-b633dd398427" containerName="keystone-cron" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.988273 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.991368 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.991419 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.991534 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.992649 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.993082 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 27 15:03:06 crc kubenswrapper[4729]: I0127 15:03:06.998261 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64"] Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108441 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108525 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108662 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108719 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108764 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldxj\" (UniqueName: \"kubernetes.io/projected/0b30179d-d4ac-44b0-9675-7f0ef071caf5-kube-api-access-sldxj\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108843 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.108909 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.210722 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.210838 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.210916 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sldxj\" (UniqueName: 
\"kubernetes.io/projected/0b30179d-d4ac-44b0-9675-7f0ef071caf5-kube-api-access-sldxj\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.212063 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.212174 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.212280 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.212368 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.216591 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.216719 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.217356 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.218322 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-inventory\") 
pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.219658 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.219687 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.231464 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sldxj\" (UniqueName: \"kubernetes.io/projected/0b30179d-d4ac-44b0-9675-7f0ef071caf5-kube-api-access-sldxj\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.332993 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.958022 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64"] Jan 27 15:03:07 crc kubenswrapper[4729]: I0127 15:03:07.963149 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:03:08 crc kubenswrapper[4729]: I0127 15:03:08.877512 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" event={"ID":"0b30179d-d4ac-44b0-9675-7f0ef071caf5","Type":"ContainerStarted","Data":"6f30516530df7d7bf10b3498ab09fb792c7cf30022e6750c9d5969f31fdb73c9"} Jan 27 15:03:09 crc kubenswrapper[4729]: I0127 15:03:09.890433 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" event={"ID":"0b30179d-d4ac-44b0-9675-7f0ef071caf5","Type":"ContainerStarted","Data":"066313577aa78e7d2b7c0e3a923d37e14c8b1bf1e59dad3d3ca2535af7e4f518"} Jan 27 15:03:09 crc kubenswrapper[4729]: I0127 15:03:09.921339 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" podStartSLOduration=3.092640957 podStartE2EDuration="3.921316753s" podCreationTimestamp="2026-01-27 15:03:06 +0000 UTC" firstStartedPulling="2026-01-27 15:03:07.962904981 +0000 UTC m=+3474.547095985" lastFinishedPulling="2026-01-27 15:03:08.791580777 +0000 UTC m=+3475.375771781" observedRunningTime="2026-01-27 15:03:09.908869223 +0000 UTC m=+3476.493060247" watchObservedRunningTime="2026-01-27 15:03:09.921316753 +0000 UTC m=+3476.505507767" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.208049 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fq78q"] Jan 
27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.213917 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.227738 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fq78q"] Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.365724 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-utilities\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.365866 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-catalog-content\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.365975 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9nsp\" (UniqueName: \"kubernetes.io/projected/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-kube-api-access-p9nsp\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.468090 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-utilities\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc 
kubenswrapper[4729]: I0127 15:04:04.468475 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-catalog-content\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.468693 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-utilities\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.468764 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9nsp\" (UniqueName: \"kubernetes.io/projected/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-kube-api-access-p9nsp\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.469289 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-catalog-content\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.488480 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9nsp\" (UniqueName: \"kubernetes.io/projected/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-kube-api-access-p9nsp\") pod \"redhat-operators-fq78q\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:04 crc kubenswrapper[4729]: I0127 15:04:04.550423 4729 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:05 crc kubenswrapper[4729]: I0127 15:04:05.071244 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fq78q"] Jan 27 15:04:05 crc kubenswrapper[4729]: I0127 15:04:05.478911 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerDied","Data":"291acf2fdb2f2b4dbb5d0edc457c095268720fe74c0b854ff3fd7626d7d48994"} Jan 27 15:04:05 crc kubenswrapper[4729]: I0127 15:04:05.479366 4729 generic.go:334] "Generic (PLEG): container finished" podID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerID="291acf2fdb2f2b4dbb5d0edc457c095268720fe74c0b854ff3fd7626d7d48994" exitCode=0 Jan 27 15:04:05 crc kubenswrapper[4729]: I0127 15:04:05.480257 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerStarted","Data":"e4cbc6aa80e1d7a5a2719876ff4744206485f9d7c1d738bec6f7152f92ce75ef"} Jan 27 15:04:07 crc kubenswrapper[4729]: I0127 15:04:07.511097 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerStarted","Data":"680fad44a9bd649f3b81f71838035f0db58b9f5b05eb16b3c9634652a8bd1bf5"} Jan 27 15:04:17 crc kubenswrapper[4729]: I0127 15:04:17.097607 4729 generic.go:334] "Generic (PLEG): container finished" podID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerID="680fad44a9bd649f3b81f71838035f0db58b9f5b05eb16b3c9634652a8bd1bf5" exitCode=0 Jan 27 15:04:17 crc kubenswrapper[4729]: I0127 15:04:17.097715 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" 
event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerDied","Data":"680fad44a9bd649f3b81f71838035f0db58b9f5b05eb16b3c9634652a8bd1bf5"} Jan 27 15:04:18 crc kubenswrapper[4729]: I0127 15:04:18.110664 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerStarted","Data":"b4d39fbd7248b70be5c1442fa0f111a62133d2d93cfef8db8e4c768bdfdc27e3"} Jan 27 15:04:18 crc kubenswrapper[4729]: I0127 15:04:18.143036 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fq78q" podStartSLOduration=1.933904426 podStartE2EDuration="14.143020585s" podCreationTimestamp="2026-01-27 15:04:04 +0000 UTC" firstStartedPulling="2026-01-27 15:04:05.481233774 +0000 UTC m=+3532.065424778" lastFinishedPulling="2026-01-27 15:04:17.690349933 +0000 UTC m=+3544.274540937" observedRunningTime="2026-01-27 15:04:18.141217898 +0000 UTC m=+3544.725408902" watchObservedRunningTime="2026-01-27 15:04:18.143020585 +0000 UTC m=+3544.727211589" Jan 27 15:04:24 crc kubenswrapper[4729]: I0127 15:04:24.551338 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:24 crc kubenswrapper[4729]: I0127 15:04:24.551969 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:25 crc kubenswrapper[4729]: I0127 15:04:25.606533 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fq78q" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="registry-server" probeResult="failure" output=< Jan 27 15:04:25 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:04:25 crc kubenswrapper[4729]: > Jan 27 15:04:34 crc kubenswrapper[4729]: I0127 15:04:34.607755 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:34 crc kubenswrapper[4729]: I0127 15:04:34.673219 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:35 crc kubenswrapper[4729]: I0127 15:04:35.412361 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fq78q"] Jan 27 15:04:36 crc kubenswrapper[4729]: I0127 15:04:36.323317 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fq78q" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="registry-server" containerID="cri-o://b4d39fbd7248b70be5c1442fa0f111a62133d2d93cfef8db8e4c768bdfdc27e3" gracePeriod=2 Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.344743 4729 generic.go:334] "Generic (PLEG): container finished" podID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerID="b4d39fbd7248b70be5c1442fa0f111a62133d2d93cfef8db8e4c768bdfdc27e3" exitCode=0 Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.345214 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerDied","Data":"b4d39fbd7248b70be5c1442fa0f111a62133d2d93cfef8db8e4c768bdfdc27e3"} Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.508496 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.677718 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9nsp\" (UniqueName: \"kubernetes.io/projected/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-kube-api-access-p9nsp\") pod \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.677845 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-utilities\") pod \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.678227 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-catalog-content\") pod \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\" (UID: \"2aa2e98e-d1fe-432b-807e-b85ec9d76af4\") " Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.679998 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-utilities" (OuterVolumeSpecName: "utilities") pod "2aa2e98e-d1fe-432b-807e-b85ec9d76af4" (UID: "2aa2e98e-d1fe-432b-807e-b85ec9d76af4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.711217 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-kube-api-access-p9nsp" (OuterVolumeSpecName: "kube-api-access-p9nsp") pod "2aa2e98e-d1fe-432b-807e-b85ec9d76af4" (UID: "2aa2e98e-d1fe-432b-807e-b85ec9d76af4"). InnerVolumeSpecName "kube-api-access-p9nsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.782014 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9nsp\" (UniqueName: \"kubernetes.io/projected/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-kube-api-access-p9nsp\") on node \"crc\" DevicePath \"\"" Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.782326 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.911000 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2aa2e98e-d1fe-432b-807e-b85ec9d76af4" (UID: "2aa2e98e-d1fe-432b-807e-b85ec9d76af4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:04:37 crc kubenswrapper[4729]: I0127 15:04:37.987412 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2aa2e98e-d1fe-432b-807e-b85ec9d76af4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.374825 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fq78q" event={"ID":"2aa2e98e-d1fe-432b-807e-b85ec9d76af4","Type":"ContainerDied","Data":"e4cbc6aa80e1d7a5a2719876ff4744206485f9d7c1d738bec6f7152f92ce75ef"} Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.374907 4729 scope.go:117] "RemoveContainer" containerID="b4d39fbd7248b70be5c1442fa0f111a62133d2d93cfef8db8e4c768bdfdc27e3" Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.374965 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fq78q" Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.404553 4729 scope.go:117] "RemoveContainer" containerID="680fad44a9bd649f3b81f71838035f0db58b9f5b05eb16b3c9634652a8bd1bf5" Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.408039 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fq78q"] Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.422315 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fq78q"] Jan 27 15:04:38 crc kubenswrapper[4729]: I0127 15:04:38.429070 4729 scope.go:117] "RemoveContainer" containerID="291acf2fdb2f2b4dbb5d0edc457c095268720fe74c0b854ff3fd7626d7d48994" Jan 27 15:04:40 crc kubenswrapper[4729]: I0127 15:04:40.068742 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" path="/var/lib/kubelet/pods/2aa2e98e-d1fe-432b-807e-b85ec9d76af4/volumes" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.961164 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szknh"] Jan 27 15:04:42 crc kubenswrapper[4729]: E0127 15:04:42.965564 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="registry-server" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.965680 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="registry-server" Jan 27 15:04:42 crc kubenswrapper[4729]: E0127 15:04:42.965769 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="extract-utilities" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.965826 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="extract-utilities" Jan 27 
15:04:42 crc kubenswrapper[4729]: E0127 15:04:42.965925 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="extract-content" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.966026 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="extract-content" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.966374 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa2e98e-d1fe-432b-807e-b85ec9d76af4" containerName="registry-server" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.968400 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:42 crc kubenswrapper[4729]: I0127 15:04:42.986992 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szknh"] Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.119690 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-utilities\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.120029 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-catalog-content\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.120270 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6p26\" (UniqueName: 
\"kubernetes.io/projected/829e1e83-307c-4855-b333-e2dcbd795605-kube-api-access-f6p26\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.223615 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-catalog-content\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.223823 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6p26\" (UniqueName: \"kubernetes.io/projected/829e1e83-307c-4855-b333-e2dcbd795605-kube-api-access-f6p26\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.223962 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-utilities\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.224563 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-utilities\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.224830 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-catalog-content\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.245570 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6p26\" (UniqueName: \"kubernetes.io/projected/829e1e83-307c-4855-b333-e2dcbd795605-kube-api-access-f6p26\") pod \"community-operators-szknh\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") " pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.296793 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:43 crc kubenswrapper[4729]: I0127 15:04:43.941273 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szknh"] Jan 27 15:04:44 crc kubenswrapper[4729]: I0127 15:04:44.459416 4729 generic.go:334] "Generic (PLEG): container finished" podID="829e1e83-307c-4855-b333-e2dcbd795605" containerID="178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200" exitCode=0 Jan 27 15:04:44 crc kubenswrapper[4729]: I0127 15:04:44.459517 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerDied","Data":"178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200"} Jan 27 15:04:44 crc kubenswrapper[4729]: I0127 15:04:44.460018 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerStarted","Data":"731c3d6ef97d2c92a6466228441607f40e742209ca1c0d2a743a6993284a01cf"} Jan 27 15:04:46 crc kubenswrapper[4729]: I0127 15:04:46.482231 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerStarted","Data":"9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441"} Jan 27 15:04:49 crc kubenswrapper[4729]: I0127 15:04:49.518041 4729 generic.go:334] "Generic (PLEG): container finished" podID="829e1e83-307c-4855-b333-e2dcbd795605" containerID="9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441" exitCode=0 Jan 27 15:04:49 crc kubenswrapper[4729]: I0127 15:04:49.518120 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerDied","Data":"9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441"} Jan 27 15:04:51 crc kubenswrapper[4729]: I0127 15:04:51.548542 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerStarted","Data":"7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848"} Jan 27 15:04:51 crc kubenswrapper[4729]: I0127 15:04:51.580646 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szknh" podStartSLOduration=3.730596828 podStartE2EDuration="9.580626356s" podCreationTimestamp="2026-01-27 15:04:42 +0000 UTC" firstStartedPulling="2026-01-27 15:04:44.463950899 +0000 UTC m=+3571.048141903" lastFinishedPulling="2026-01-27 15:04:50.313980427 +0000 UTC m=+3576.898171431" observedRunningTime="2026-01-27 15:04:51.57284991 +0000 UTC m=+3578.157040914" watchObservedRunningTime="2026-01-27 15:04:51.580626356 +0000 UTC m=+3578.164817360" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.448745 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f86qc"] Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.452084 4729 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.464156 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f86qc"] Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.499459 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxqgw\" (UniqueName: \"kubernetes.io/projected/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-kube-api-access-dxqgw\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.500101 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-utilities\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.500161 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-catalog-content\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.602111 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-utilities\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.602190 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-catalog-content\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.602399 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxqgw\" (UniqueName: \"kubernetes.io/projected/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-kube-api-access-dxqgw\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.603066 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-utilities\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.603128 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-catalog-content\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.631797 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxqgw\" (UniqueName: \"kubernetes.io/projected/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-kube-api-access-dxqgw\") pod \"redhat-marketplace-f86qc\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") " pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:52 crc kubenswrapper[4729]: I0127 15:04:52.788201 4729 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 27 15:04:53 crc kubenswrapper[4729]: I0127 15:04:53.299508 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:53 crc kubenswrapper[4729]: I0127 15:04:53.299895 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:53 crc kubenswrapper[4729]: I0127 15:04:53.354091 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f86qc"] Jan 27 15:04:53 crc kubenswrapper[4729]: I0127 15:04:53.374321 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szknh" Jan 27 15:04:53 crc kubenswrapper[4729]: W0127 15:04:53.387189 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod639b86cd_d1b4_4518_bab6_cc3fe6b443a9.slice/crio-e23c7937551be839416ed9ba26139fbf173ed9a3f17b527bd0ef8752743ed268 WatchSource:0}: Error finding container e23c7937551be839416ed9ba26139fbf173ed9a3f17b527bd0ef8752743ed268: Status 404 returned error can't find the container with id e23c7937551be839416ed9ba26139fbf173ed9a3f17b527bd0ef8752743ed268 Jan 27 15:04:53 crc kubenswrapper[4729]: I0127 15:04:53.573782 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f86qc" event={"ID":"639b86cd-d1b4-4518-bab6-cc3fe6b443a9","Type":"ContainerStarted","Data":"e23c7937551be839416ed9ba26139fbf173ed9a3f17b527bd0ef8752743ed268"} Jan 27 15:04:54 crc kubenswrapper[4729]: I0127 15:04:54.599141 4729 generic.go:334] "Generic (PLEG): container finished" podID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerID="9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b" exitCode=0 Jan 27 15:04:54 crc kubenswrapper[4729]: I0127 
15:04:54.599282 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f86qc" event={"ID":"639b86cd-d1b4-4518-bab6-cc3fe6b443a9","Type":"ContainerDied","Data":"9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b"} Jan 27 15:04:56 crc kubenswrapper[4729]: I0127 15:04:56.627698 4729 generic.go:334] "Generic (PLEG): container finished" podID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerID="26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a" exitCode=0 Jan 27 15:04:56 crc kubenswrapper[4729]: I0127 15:04:56.627767 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f86qc" event={"ID":"639b86cd-d1b4-4518-bab6-cc3fe6b443a9","Type":"ContainerDied","Data":"26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a"} Jan 27 15:05:01 crc kubenswrapper[4729]: I0127 15:05:01.725178 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f86qc" event={"ID":"639b86cd-d1b4-4518-bab6-cc3fe6b443a9","Type":"ContainerStarted","Data":"79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c"} Jan 27 15:05:01 crc kubenswrapper[4729]: I0127 15:05:01.747302 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f86qc" podStartSLOduration=3.343801992 podStartE2EDuration="9.747279679s" podCreationTimestamp="2026-01-27 15:04:52 +0000 UTC" firstStartedPulling="2026-01-27 15:04:54.601903593 +0000 UTC m=+3581.186094617" lastFinishedPulling="2026-01-27 15:05:01.0053813 +0000 UTC m=+3587.589572304" observedRunningTime="2026-01-27 15:05:01.743324284 +0000 UTC m=+3588.327515298" watchObservedRunningTime="2026-01-27 15:05:01.747279679 +0000 UTC m=+3588.331470703" Jan 27 15:05:02 crc kubenswrapper[4729]: I0127 15:05:02.789904 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f86qc" Jan 
Jan 27 15:05:02 crc kubenswrapper[4729]: I0127 15:05:02.791147 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f86qc"
Jan 27 15:05:02 crc kubenswrapper[4729]: I0127 15:05:02.850762 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f86qc"
Jan 27 15:05:03 crc kubenswrapper[4729]: I0127 15:05:03.356499 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szknh"
Jan 27 15:05:03 crc kubenswrapper[4729]: I0127 15:05:03.421893 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szknh"]
Jan 27 15:05:03 crc kubenswrapper[4729]: I0127 15:05:03.758751 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szknh" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="registry-server" containerID="cri-o://7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848" gracePeriod=2
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.351091 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szknh"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.437950 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6p26\" (UniqueName: \"kubernetes.io/projected/829e1e83-307c-4855-b333-e2dcbd795605-kube-api-access-f6p26\") pod \"829e1e83-307c-4855-b333-e2dcbd795605\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") "
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.438032 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-utilities\") pod \"829e1e83-307c-4855-b333-e2dcbd795605\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") "
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.438150 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-catalog-content\") pod \"829e1e83-307c-4855-b333-e2dcbd795605\" (UID: \"829e1e83-307c-4855-b333-e2dcbd795605\") "
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.439125 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-utilities" (OuterVolumeSpecName: "utilities") pod "829e1e83-307c-4855-b333-e2dcbd795605" (UID: "829e1e83-307c-4855-b333-e2dcbd795605"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.439590 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.448013 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829e1e83-307c-4855-b333-e2dcbd795605-kube-api-access-f6p26" (OuterVolumeSpecName: "kube-api-access-f6p26") pod "829e1e83-307c-4855-b333-e2dcbd795605" (UID: "829e1e83-307c-4855-b333-e2dcbd795605"). InnerVolumeSpecName "kube-api-access-f6p26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.499669 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "829e1e83-307c-4855-b333-e2dcbd795605" (UID: "829e1e83-307c-4855-b333-e2dcbd795605"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.542341 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829e1e83-307c-4855-b333-e2dcbd795605-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.542381 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6p26\" (UniqueName: \"kubernetes.io/projected/829e1e83-307c-4855-b333-e2dcbd795605-kube-api-access-f6p26\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.775865 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szknh"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.775942 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerDied","Data":"7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848"}
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.776059 4729 generic.go:334] "Generic (PLEG): container finished" podID="829e1e83-307c-4855-b333-e2dcbd795605" containerID="7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848" exitCode=0
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.776127 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szknh" event={"ID":"829e1e83-307c-4855-b333-e2dcbd795605","Type":"ContainerDied","Data":"731c3d6ef97d2c92a6466228441607f40e742209ca1c0d2a743a6993284a01cf"}
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.776155 4729 scope.go:117] "RemoveContainer" containerID="7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.805741 4729 scope.go:117] "RemoveContainer" containerID="9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.826495 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szknh"]
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.843455 4729 scope.go:117] "RemoveContainer" containerID="178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.843465 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szknh"]
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.919847 4729 scope.go:117] "RemoveContainer" containerID="7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848"
Jan 27 15:05:04 crc kubenswrapper[4729]: E0127 15:05:04.921481 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848\": container with ID starting with 7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848 not found: ID does not exist" containerID="7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.921516 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848"} err="failed to get container status \"7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848\": rpc error: code = NotFound desc = could not find container \"7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848\": container with ID starting with 7ba6cf87152784b4d9620cb6edf994a0b45351c4fae485dc4df3a7e5d4018848 not found: ID does not exist"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.921541 4729 scope.go:117] "RemoveContainer" containerID="9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441"
Jan 27 15:05:04 crc kubenswrapper[4729]: E0127 15:05:04.922579 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441\": container with ID starting with 9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441 not found: ID does not exist" containerID="9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.922629 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441"} err="failed to get container status \"9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441\": rpc error: code = NotFound desc = could not find container \"9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441\": container with ID starting with 9dd1cc72c3ed7590f39fee9e0a61d0c6378c39c19e40dcafd6c184a34ab14441 not found: ID does not exist"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.922655 4729 scope.go:117] "RemoveContainer" containerID="178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200"
Jan 27 15:05:04 crc kubenswrapper[4729]: E0127 15:05:04.923424 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200\": container with ID starting with 178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200 not found: ID does not exist" containerID="178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200"
Jan 27 15:05:04 crc kubenswrapper[4729]: I0127 15:05:04.923511 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200"} err="failed to get container status \"178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200\": rpc error: code = NotFound desc = could not find container \"178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200\": container with ID starting with 178a3f29ab84e33df51fafccccf38db27d1e543eae8f660261a60049396fd200 not found: ID does not exist"
Jan 27 15:05:06 crc kubenswrapper[4729]: I0127 15:05:06.789753 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829e1e83-307c-4855-b333-e2dcbd795605" path="/var/lib/kubelet/pods/829e1e83-307c-4855-b333-e2dcbd795605/volumes"
Jan 27 15:05:12 crc kubenswrapper[4729]: I0127 15:05:12.856407 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f86qc"
Jan 27 15:05:12 crc kubenswrapper[4729]: I0127 15:05:12.933546 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f86qc"]
Jan 27 15:05:12 crc kubenswrapper[4729]: I0127 15:05:12.934128 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f86qc" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="registry-server" containerID="cri-o://79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c" gracePeriod=2
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.561844 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f86qc"
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.741641 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-utilities\") pod \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") "
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.741895 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-catalog-content\") pod \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") "
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.741990 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxqgw\" (UniqueName: \"kubernetes.io/projected/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-kube-api-access-dxqgw\") pod \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\" (UID: \"639b86cd-d1b4-4518-bab6-cc3fe6b443a9\") "
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.742679 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-utilities" (OuterVolumeSpecName: "utilities") pod "639b86cd-d1b4-4518-bab6-cc3fe6b443a9" (UID: "639b86cd-d1b4-4518-bab6-cc3fe6b443a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.753822 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-kube-api-access-dxqgw" (OuterVolumeSpecName: "kube-api-access-dxqgw") pod "639b86cd-d1b4-4518-bab6-cc3fe6b443a9" (UID: "639b86cd-d1b4-4518-bab6-cc3fe6b443a9"). InnerVolumeSpecName "kube-api-access-dxqgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.765309 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "639b86cd-d1b4-4518-bab6-cc3fe6b443a9" (UID: "639b86cd-d1b4-4518-bab6-cc3fe6b443a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.845244 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.845303 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.845351 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxqgw\" (UniqueName: \"kubernetes.io/projected/639b86cd-d1b4-4518-bab6-cc3fe6b443a9-kube-api-access-dxqgw\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.897361 4729 generic.go:334] "Generic (PLEG): container finished" podID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerID="79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c" exitCode=0
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.897425 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f86qc" event={"ID":"639b86cd-d1b4-4518-bab6-cc3fe6b443a9","Type":"ContainerDied","Data":"79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c"}
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.897461 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f86qc" event={"ID":"639b86cd-d1b4-4518-bab6-cc3fe6b443a9","Type":"ContainerDied","Data":"e23c7937551be839416ed9ba26139fbf173ed9a3f17b527bd0ef8752743ed268"}
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.897487 4729 scope.go:117] "RemoveContainer" containerID="79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c"
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.897484 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f86qc"
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.926454 4729 scope.go:117] "RemoveContainer" containerID="26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a"
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.935470 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f86qc"]
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.948388 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f86qc"]
Jan 27 15:05:13 crc kubenswrapper[4729]: I0127 15:05:13.963270 4729 scope.go:117] "RemoveContainer" containerID="9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.018370 4729 scope.go:117] "RemoveContainer" containerID="79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c"
Jan 27 15:05:14 crc kubenswrapper[4729]: E0127 15:05:14.019280 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c\": container with ID starting with 79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c not found: ID does not exist" containerID="79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.019319 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c"} err="failed to get container status \"79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c\": rpc error: code = NotFound desc = could not find container \"79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c\": container with ID starting with 79646252fc9ce4fe93234bdedad78a0842e96ce811d3a5d71ea75eb9da9b622c not found: ID does not exist"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.019346 4729 scope.go:117] "RemoveContainer" containerID="26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a"
Jan 27 15:05:14 crc kubenswrapper[4729]: E0127 15:05:14.019642 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a\": container with ID starting with 26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a not found: ID does not exist" containerID="26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.019672 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a"} err="failed to get container status \"26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a\": rpc error: code = NotFound desc = could not find container \"26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a\": container with ID starting with 26bb00defe2001b1844b112521984551eee75a2694dc01200b8c5e6b7dd0f21a not found: ID does not exist"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.019689 4729 scope.go:117] "RemoveContainer" containerID="9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b"
Jan 27 15:05:14 crc kubenswrapper[4729]: E0127 15:05:14.020061 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b\": container with ID starting with 9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b not found: ID does not exist" containerID="9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.020091 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b"} err="failed to get container status \"9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b\": rpc error: code = NotFound desc = could not find container \"9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b\": container with ID starting with 9b7c6400b961f137566999948d9a551b191ff9d9d3556addd6b147174d7dfc5b not found: ID does not exist"
Jan 27 15:05:14 crc kubenswrapper[4729]: I0127 15:05:14.064884 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" path="/var/lib/kubelet/pods/639b86cd-d1b4-4518-bab6-cc3fe6b443a9/volumes"
Jan 27 15:05:22 crc kubenswrapper[4729]: I0127 15:05:22.655922 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:05:22 crc kubenswrapper[4729]: I0127 15:05:22.656550 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:05:31 crc kubenswrapper[4729]: I0127 15:05:31.086109 4729 generic.go:334] "Generic (PLEG): container finished" podID="0b30179d-d4ac-44b0-9675-7f0ef071caf5" containerID="066313577aa78e7d2b7c0e3a923d37e14c8b1bf1e59dad3d3ca2535af7e4f518" exitCode=0
Jan 27 15:05:31 crc kubenswrapper[4729]: I0127 15:05:31.086199 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" event={"ID":"0b30179d-d4ac-44b0-9675-7f0ef071caf5","Type":"ContainerDied","Data":"066313577aa78e7d2b7c0e3a923d37e14c8b1bf1e59dad3d3ca2535af7e4f518"}
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.750771 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64"
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.853217 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sldxj\" (UniqueName: \"kubernetes.io/projected/0b30179d-d4ac-44b0-9675-7f0ef071caf5-kube-api-access-sldxj\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.853330 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-0\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.853356 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-telemetry-power-monitoring-combined-ca-bundle\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.853456 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-inventory\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.853588 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-2\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.853690 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-1\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.854438 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ssh-key-openstack-edpm-ipam\") pod \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\" (UID: \"0b30179d-d4ac-44b0-9675-7f0ef071caf5\") "
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.859746 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b30179d-d4ac-44b0-9675-7f0ef071caf5-kube-api-access-sldxj" (OuterVolumeSpecName: "kube-api-access-sldxj") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "kube-api-access-sldxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.864606 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.888549 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.890077 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.903178 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-inventory" (OuterVolumeSpecName: "inventory") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.905813 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.908974 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b30179d-d4ac-44b0-9675-7f0ef071caf5" (UID: "0b30179d-d4ac-44b0-9675-7f0ef071caf5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960143 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960183 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960197 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960208 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sldxj\" (UniqueName: \"kubernetes.io/projected/0b30179d-d4ac-44b0-9675-7f0ef071caf5-kube-api-access-sldxj\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960218 4729 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960231 4729 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:32 crc kubenswrapper[4729]: I0127 15:05:32.960244 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b30179d-d4ac-44b0-9675-7f0ef071caf5-inventory\") on node \"crc\" DevicePath \"\""
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.147617 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64" event={"ID":"0b30179d-d4ac-44b0-9675-7f0ef071caf5","Type":"ContainerDied","Data":"6f30516530df7d7bf10b3498ab09fb792c7cf30022e6750c9d5969f31fdb73c9"}
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.147713 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f30516530df7d7bf10b3498ab09fb792c7cf30022e6750c9d5969f31fdb73c9"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.148175 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.225349 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"]
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.226421 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="extract-content"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.226561 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="extract-content"
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.226636 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="extract-utilities"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.226721 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="extract-utilities"
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.226810 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="registry-server"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.226915 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="registry-server"
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.226995 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="registry-server"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.227048 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="registry-server"
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.231278 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b30179d-d4ac-44b0-9675-7f0ef071caf5" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.231322 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b30179d-d4ac-44b0-9675-7f0ef071caf5" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.231411 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="extract-content"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.231422 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="extract-content"
Jan 27 15:05:33 crc kubenswrapper[4729]: E0127 15:05:33.231440 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="extract-utilities"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.231453 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="extract-utilities"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.232315 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="829e1e83-307c-4855-b333-e2dcbd795605" containerName="registry-server"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.232443 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="639b86cd-d1b4-4518-bab6-cc3fe6b443a9" containerName="registry-server"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.232526 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b30179d-d4ac-44b0-9675-7f0ef071caf5" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.233706 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.237862 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.238557 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.238780 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.239002 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.239213 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gtjbq"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.271479 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"]
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.379925 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.380403 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.380728 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.380941 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.381087 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzs7m\" (UniqueName: \"kubernetes.io/projected/e771e774-7470-4b36-a60e-bab34a04185a-kube-api-access-tzs7m\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.483556 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"
Jan 27 15:05:33 
crc kubenswrapper[4729]: I0127 15:05:33.484707 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.484786 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.484842 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzs7m\" (UniqueName: \"kubernetes.io/projected/e771e774-7470-4b36-a60e-bab34a04185a-kube-api-access-tzs7m\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.485139 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.496034 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.498833 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.507599 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.517587 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.535357 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzs7m\" (UniqueName: \"kubernetes.io/projected/e771e774-7470-4b36-a60e-bab34a04185a-kube-api-access-tzs7m\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b4nnk\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " 
pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:33 crc kubenswrapper[4729]: I0127 15:05:33.756607 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:34 crc kubenswrapper[4729]: I0127 15:05:34.349736 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk"] Jan 27 15:05:35 crc kubenswrapper[4729]: I0127 15:05:35.167189 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" event={"ID":"e771e774-7470-4b36-a60e-bab34a04185a","Type":"ContainerStarted","Data":"c1a012e43320fdbd06c195f18f004d8ce271b4fdbe7de003690cb5961d6b8f19"} Jan 27 15:05:36 crc kubenswrapper[4729]: I0127 15:05:36.178592 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" event={"ID":"e771e774-7470-4b36-a60e-bab34a04185a","Type":"ContainerStarted","Data":"3120fb7b6972d617d74cc3b6df61179f2cec65c703ba57338c8b28cb574b8bcb"} Jan 27 15:05:36 crc kubenswrapper[4729]: I0127 15:05:36.197364 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" podStartSLOduration=1.782519508 podStartE2EDuration="3.19733137s" podCreationTimestamp="2026-01-27 15:05:33 +0000 UTC" firstStartedPulling="2026-01-27 15:05:34.349715381 +0000 UTC m=+3620.933906385" lastFinishedPulling="2026-01-27 15:05:35.764527243 +0000 UTC m=+3622.348718247" observedRunningTime="2026-01-27 15:05:36.196410216 +0000 UTC m=+3622.780601230" watchObservedRunningTime="2026-01-27 15:05:36.19733137 +0000 UTC m=+3622.781522384" Jan 27 15:05:52 crc kubenswrapper[4729]: I0127 15:05:52.655038 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:05:52 crc kubenswrapper[4729]: I0127 15:05:52.655636 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:05:55 crc kubenswrapper[4729]: I0127 15:05:55.407228 4729 generic.go:334] "Generic (PLEG): container finished" podID="e771e774-7470-4b36-a60e-bab34a04185a" containerID="3120fb7b6972d617d74cc3b6df61179f2cec65c703ba57338c8b28cb574b8bcb" exitCode=0 Jan 27 15:05:55 crc kubenswrapper[4729]: I0127 15:05:55.407355 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" event={"ID":"e771e774-7470-4b36-a60e-bab34a04185a","Type":"ContainerDied","Data":"3120fb7b6972d617d74cc3b6df61179f2cec65c703ba57338c8b28cb574b8bcb"} Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.173501 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.273580 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-0\") pod \"e771e774-7470-4b36-a60e-bab34a04185a\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.274142 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-ssh-key-openstack-edpm-ipam\") pod \"e771e774-7470-4b36-a60e-bab34a04185a\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.274297 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzs7m\" (UniqueName: \"kubernetes.io/projected/e771e774-7470-4b36-a60e-bab34a04185a-kube-api-access-tzs7m\") pod \"e771e774-7470-4b36-a60e-bab34a04185a\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.274386 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-1\") pod \"e771e774-7470-4b36-a60e-bab34a04185a\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.274471 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-inventory\") pod \"e771e774-7470-4b36-a60e-bab34a04185a\" (UID: \"e771e774-7470-4b36-a60e-bab34a04185a\") " Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 
15:05:57.291490 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e771e774-7470-4b36-a60e-bab34a04185a-kube-api-access-tzs7m" (OuterVolumeSpecName: "kube-api-access-tzs7m") pod "e771e774-7470-4b36-a60e-bab34a04185a" (UID: "e771e774-7470-4b36-a60e-bab34a04185a"). InnerVolumeSpecName "kube-api-access-tzs7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.318369 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "e771e774-7470-4b36-a60e-bab34a04185a" (UID: "e771e774-7470-4b36-a60e-bab34a04185a"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.335052 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "e771e774-7470-4b36-a60e-bab34a04185a" (UID: "e771e774-7470-4b36-a60e-bab34a04185a"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.338408 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e771e774-7470-4b36-a60e-bab34a04185a" (UID: "e771e774-7470-4b36-a60e-bab34a04185a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.360591 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-inventory" (OuterVolumeSpecName: "inventory") pod "e771e774-7470-4b36-a60e-bab34a04185a" (UID: "e771e774-7470-4b36-a60e-bab34a04185a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.376997 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.377256 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzs7m\" (UniqueName: \"kubernetes.io/projected/e771e774-7470-4b36-a60e-bab34a04185a-kube-api-access-tzs7m\") on node \"crc\" DevicePath \"\"" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.377314 4729 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.377374 4729 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-inventory\") on node \"crc\" DevicePath \"\"" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.377440 4729 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e771e774-7470-4b36-a60e-bab34a04185a-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.429019 4729 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" event={"ID":"e771e774-7470-4b36-a60e-bab34a04185a","Type":"ContainerDied","Data":"c1a012e43320fdbd06c195f18f004d8ce271b4fdbe7de003690cb5961d6b8f19"} Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.429065 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1a012e43320fdbd06c195f18f004d8ce271b4fdbe7de003690cb5961d6b8f19" Jan 27 15:05:57 crc kubenswrapper[4729]: I0127 15:05:57.429073 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b4nnk" Jan 27 15:06:22 crc kubenswrapper[4729]: I0127 15:06:22.655566 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:06:22 crc kubenswrapper[4729]: I0127 15:06:22.656247 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:06:22 crc kubenswrapper[4729]: I0127 15:06:22.657697 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:06:22 crc kubenswrapper[4729]: I0127 15:06:22.658749 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:06:22 crc kubenswrapper[4729]: I0127 15:06:22.658809 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" gracePeriod=600 Jan 27 15:06:22 crc kubenswrapper[4729]: E0127 15:06:22.810175 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:06:23 crc kubenswrapper[4729]: I0127 15:06:23.764031 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" exitCode=0 Jan 27 15:06:23 crc kubenswrapper[4729]: I0127 15:06:23.764101 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b"} Jan 27 15:06:23 crc kubenswrapper[4729]: I0127 15:06:23.764413 4729 scope.go:117] "RemoveContainer" containerID="1168bac9b9c982642a62e9bd5008c45e1f4dd214d09d026196c14caf61da1354" Jan 27 15:06:23 crc kubenswrapper[4729]: I0127 15:06:23.764849 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:06:23 crc kubenswrapper[4729]: E0127 15:06:23.765290 4729 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:06:38 crc kubenswrapper[4729]: I0127 15:06:38.054827 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:06:38 crc kubenswrapper[4729]: E0127 15:06:38.055642 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:06:53 crc kubenswrapper[4729]: I0127 15:06:53.051530 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:06:53 crc kubenswrapper[4729]: E0127 15:06:53.052488 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:07:05 crc kubenswrapper[4729]: I0127 15:07:05.051725 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:07:05 crc kubenswrapper[4729]: E0127 15:07:05.052622 4729 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:07:20 crc kubenswrapper[4729]: I0127 15:07:20.051493 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:07:20 crc kubenswrapper[4729]: E0127 15:07:20.052788 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:07:35 crc kubenswrapper[4729]: I0127 15:07:35.051760 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:07:35 crc kubenswrapper[4729]: E0127 15:07:35.052587 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:07:48 crc kubenswrapper[4729]: I0127 15:07:48.051899 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:07:48 crc kubenswrapper[4729]: E0127 
15:07:48.052753 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:07:59 crc kubenswrapper[4729]: I0127 15:07:59.051064 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:07:59 crc kubenswrapper[4729]: E0127 15:07:59.051905 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.128905 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zsnlz"] Jan 27 15:08:03 crc kubenswrapper[4729]: E0127 15:08:03.132331 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e771e774-7470-4b36-a60e-bab34a04185a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.132371 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e771e774-7470-4b36-a60e-bab34a04185a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.133055 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e771e774-7470-4b36-a60e-bab34a04185a" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 27 15:08:03 crc 
kubenswrapper[4729]: I0127 15:08:03.136748 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.210086 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zsnlz"] Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.247701 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-utilities\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.247968 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-catalog-content\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.248060 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4c9p\" (UniqueName: \"kubernetes.io/projected/2191fc7d-925f-4afe-a9f0-682980136dff-kube-api-access-m4c9p\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.352253 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-catalog-content\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 
15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.352444 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4c9p\" (UniqueName: \"kubernetes.io/projected/2191fc7d-925f-4afe-a9f0-682980136dff-kube-api-access-m4c9p\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.352795 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-utilities\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.353554 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-utilities\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.354028 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-catalog-content\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: I0127 15:08:03.377998 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4c9p\" (UniqueName: \"kubernetes.io/projected/2191fc7d-925f-4afe-a9f0-682980136dff-kube-api-access-m4c9p\") pod \"certified-operators-zsnlz\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:03 crc kubenswrapper[4729]: 
I0127 15:08:03.508684 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:04 crc kubenswrapper[4729]: I0127 15:08:04.112785 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zsnlz"] Jan 27 15:08:05 crc kubenswrapper[4729]: I0127 15:08:05.016286 4729 generic.go:334] "Generic (PLEG): container finished" podID="2191fc7d-925f-4afe-a9f0-682980136dff" containerID="aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9" exitCode=0 Jan 27 15:08:05 crc kubenswrapper[4729]: I0127 15:08:05.016490 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerDied","Data":"aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9"} Jan 27 15:08:05 crc kubenswrapper[4729]: I0127 15:08:05.016621 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerStarted","Data":"34565d236382e1474f32ac7ac502c5ae9f67bca02f39f05b5cc9784818103645"} Jan 27 15:08:07 crc kubenswrapper[4729]: I0127 15:08:07.042514 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerStarted","Data":"3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989"} Jan 27 15:08:09 crc kubenswrapper[4729]: I0127 15:08:09.076301 4729 generic.go:334] "Generic (PLEG): container finished" podID="2191fc7d-925f-4afe-a9f0-682980136dff" containerID="3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989" exitCode=0 Jan 27 15:08:09 crc kubenswrapper[4729]: I0127 15:08:09.076378 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" 
event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerDied","Data":"3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989"} Jan 27 15:08:09 crc kubenswrapper[4729]: I0127 15:08:09.079606 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:08:11 crc kubenswrapper[4729]: I0127 15:08:11.050846 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:08:11 crc kubenswrapper[4729]: E0127 15:08:11.052598 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:08:11 crc kubenswrapper[4729]: I0127 15:08:11.099445 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerStarted","Data":"26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8"} Jan 27 15:08:11 crc kubenswrapper[4729]: I0127 15:08:11.118755 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zsnlz" podStartSLOduration=3.27001625 podStartE2EDuration="8.118737321s" podCreationTimestamp="2026-01-27 15:08:03 +0000 UTC" firstStartedPulling="2026-01-27 15:08:05.018614774 +0000 UTC m=+3771.602805788" lastFinishedPulling="2026-01-27 15:08:09.867335834 +0000 UTC m=+3776.451526859" observedRunningTime="2026-01-27 15:08:11.118379381 +0000 UTC m=+3777.702570395" watchObservedRunningTime="2026-01-27 15:08:11.118737321 +0000 UTC m=+3777.702928335" Jan 27 15:08:13 crc kubenswrapper[4729]: I0127 
15:08:13.509974 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:13 crc kubenswrapper[4729]: I0127 15:08:13.510537 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:13 crc kubenswrapper[4729]: I0127 15:08:13.563808 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:23 crc kubenswrapper[4729]: I0127 15:08:23.638735 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:23 crc kubenswrapper[4729]: I0127 15:08:23.742466 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zsnlz"] Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.270932 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zsnlz" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="registry-server" containerID="cri-o://26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8" gracePeriod=2 Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.896816 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.979806 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-utilities\") pod \"2191fc7d-925f-4afe-a9f0-682980136dff\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.980036 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-catalog-content\") pod \"2191fc7d-925f-4afe-a9f0-682980136dff\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.980201 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4c9p\" (UniqueName: \"kubernetes.io/projected/2191fc7d-925f-4afe-a9f0-682980136dff-kube-api-access-m4c9p\") pod \"2191fc7d-925f-4afe-a9f0-682980136dff\" (UID: \"2191fc7d-925f-4afe-a9f0-682980136dff\") " Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.981778 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-utilities" (OuterVolumeSpecName: "utilities") pod "2191fc7d-925f-4afe-a9f0-682980136dff" (UID: "2191fc7d-925f-4afe-a9f0-682980136dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:24 crc kubenswrapper[4729]: I0127 15:08:24.987940 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2191fc7d-925f-4afe-a9f0-682980136dff-kube-api-access-m4c9p" (OuterVolumeSpecName: "kube-api-access-m4c9p") pod "2191fc7d-925f-4afe-a9f0-682980136dff" (UID: "2191fc7d-925f-4afe-a9f0-682980136dff"). InnerVolumeSpecName "kube-api-access-m4c9p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.044843 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2191fc7d-925f-4afe-a9f0-682980136dff" (UID: "2191fc7d-925f-4afe-a9f0-682980136dff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.052164 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:08:25 crc kubenswrapper[4729]: E0127 15:08:25.052478 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.083707 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.083757 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2191fc7d-925f-4afe-a9f0-682980136dff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.083775 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4c9p\" (UniqueName: \"kubernetes.io/projected/2191fc7d-925f-4afe-a9f0-682980136dff-kube-api-access-m4c9p\") on node \"crc\" DevicePath \"\"" Jan 27 15:08:25 
crc kubenswrapper[4729]: I0127 15:08:25.285749 4729 generic.go:334] "Generic (PLEG): container finished" podID="2191fc7d-925f-4afe-a9f0-682980136dff" containerID="26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8" exitCode=0 Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.285797 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerDied","Data":"26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8"} Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.285829 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zsnlz" event={"ID":"2191fc7d-925f-4afe-a9f0-682980136dff","Type":"ContainerDied","Data":"34565d236382e1474f32ac7ac502c5ae9f67bca02f39f05b5cc9784818103645"} Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.285849 4729 scope.go:117] "RemoveContainer" containerID="26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.285942 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zsnlz" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.330205 4729 scope.go:117] "RemoveContainer" containerID="3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.346345 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zsnlz"] Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.363380 4729 scope.go:117] "RemoveContainer" containerID="aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.364439 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zsnlz"] Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.443854 4729 scope.go:117] "RemoveContainer" containerID="26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8" Jan 27 15:08:25 crc kubenswrapper[4729]: E0127 15:08:25.444541 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8\": container with ID starting with 26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8 not found: ID does not exist" containerID="26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.444578 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8"} err="failed to get container status \"26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8\": rpc error: code = NotFound desc = could not find container \"26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8\": container with ID starting with 26c21a3c835ce315dad75e28d849df278773911747d6937a3225ad7d084873a8 not 
found: ID does not exist" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.444603 4729 scope.go:117] "RemoveContainer" containerID="3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989" Jan 27 15:08:25 crc kubenswrapper[4729]: E0127 15:08:25.445042 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989\": container with ID starting with 3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989 not found: ID does not exist" containerID="3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.445084 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989"} err="failed to get container status \"3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989\": rpc error: code = NotFound desc = could not find container \"3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989\": container with ID starting with 3e4c2d30f0f62abbc2832d8e21a236f17c0f8209387d2ef1fbe2df004fe17989 not found: ID does not exist" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.445102 4729 scope.go:117] "RemoveContainer" containerID="aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9" Jan 27 15:08:25 crc kubenswrapper[4729]: E0127 15:08:25.445481 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9\": container with ID starting with aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9 not found: ID does not exist" containerID="aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9" Jan 27 15:08:25 crc kubenswrapper[4729]: I0127 15:08:25.445509 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9"} err="failed to get container status \"aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9\": rpc error: code = NotFound desc = could not find container \"aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9\": container with ID starting with aab5a13748c09cd63d27c0520dc78208883a6bfc487a3bdc5a09cb53b8cd3cc9 not found: ID does not exist" Jan 27 15:08:26 crc kubenswrapper[4729]: I0127 15:08:26.076316 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" path="/var/lib/kubelet/pods/2191fc7d-925f-4afe-a9f0-682980136dff/volumes" Jan 27 15:08:38 crc kubenswrapper[4729]: I0127 15:08:38.052077 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:08:38 crc kubenswrapper[4729]: E0127 15:08:38.053078 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:08:50 crc kubenswrapper[4729]: I0127 15:08:50.051541 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:08:50 crc kubenswrapper[4729]: E0127 15:08:50.052335 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:09:01 crc kubenswrapper[4729]: I0127 15:09:01.051917 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:09:01 crc kubenswrapper[4729]: E0127 15:09:01.052820 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:09:13 crc kubenswrapper[4729]: I0127 15:09:13.052309 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:09:13 crc kubenswrapper[4729]: E0127 15:09:13.055014 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:09:28 crc kubenswrapper[4729]: I0127 15:09:28.056918 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:09:28 crc kubenswrapper[4729]: E0127 15:09:28.059225 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:09:41 crc kubenswrapper[4729]: I0127 15:09:41.051752 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:09:41 crc kubenswrapper[4729]: E0127 15:09:41.052910 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:09:56 crc kubenswrapper[4729]: I0127 15:09:56.051277 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:09:56 crc kubenswrapper[4729]: E0127 15:09:56.052102 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:10:09 crc kubenswrapper[4729]: I0127 15:10:09.052494 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:10:09 crc kubenswrapper[4729]: E0127 15:10:09.053428 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:10:20 crc kubenswrapper[4729]: I0127 15:10:20.051499 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:10:20 crc kubenswrapper[4729]: E0127 15:10:20.052480 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:10:35 crc kubenswrapper[4729]: I0127 15:10:35.051083 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:10:35 crc kubenswrapper[4729]: E0127 15:10:35.052049 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:10:49 crc kubenswrapper[4729]: I0127 15:10:49.050820 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:10:49 crc kubenswrapper[4729]: E0127 15:10:49.051756 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:11:04 crc kubenswrapper[4729]: I0127 15:11:04.064590 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:11:04 crc kubenswrapper[4729]: E0127 15:11:04.065531 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:11:18 crc kubenswrapper[4729]: I0127 15:11:18.051001 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:11:18 crc kubenswrapper[4729]: E0127 15:11:18.051800 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:11:31 crc kubenswrapper[4729]: I0127 15:11:31.051269 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:11:32 crc kubenswrapper[4729]: I0127 15:11:32.476160 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"c84f54723b147f334aeb8dea01ea3f8e3a099d597cf12aba62583d361196513a"} Jan 27 15:12:42 crc kubenswrapper[4729]: E0127 15:12:42.060890 4729 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.171:33766->38.129.56.171:42429: write tcp 38.129.56.171:33766->38.129.56.171:42429: write: broken pipe Jan 27 15:13:52 crc kubenswrapper[4729]: I0127 15:13:52.361974 4729 trace.go:236] Trace[1070927485]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (27-Jan-2026 15:13:51.311) (total time: 1050ms): Jan 27 15:13:52 crc kubenswrapper[4729]: Trace[1070927485]: [1.050861733s] [1.050861733s] END Jan 27 15:13:52 crc kubenswrapper[4729]: I0127 15:13:52.655961 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:13:52 crc kubenswrapper[4729]: I0127 15:13:52.656035 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:14:22 crc kubenswrapper[4729]: I0127 15:14:22.655671 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:14:22 crc kubenswrapper[4729]: I0127 15:14:22.656340 4729 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.655228 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.655856 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.655996 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.657009 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c84f54723b147f334aeb8dea01ea3f8e3a099d597cf12aba62583d361196513a"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.657079 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" 
containerID="cri-o://c84f54723b147f334aeb8dea01ea3f8e3a099d597cf12aba62583d361196513a" gracePeriod=600 Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.904029 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="c84f54723b147f334aeb8dea01ea3f8e3a099d597cf12aba62583d361196513a" exitCode=0 Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.904087 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"c84f54723b147f334aeb8dea01ea3f8e3a099d597cf12aba62583d361196513a"} Jan 27 15:14:52 crc kubenswrapper[4729]: I0127 15:14:52.904161 4729 scope.go:117] "RemoveContainer" containerID="0081d898e269f25e15e51910cb89c17d64f139c2cede76c2202d9809c42aaa5b" Jan 27 15:14:53 crc kubenswrapper[4729]: I0127 15:14:53.918489 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"} Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.171096 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9"] Jan 27 15:15:00 crc kubenswrapper[4729]: E0127 15:15:00.172307 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="extract-content" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.172326 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="extract-content" Jan 27 15:15:00 crc kubenswrapper[4729]: E0127 15:15:00.172349 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" 
containerName="extract-utilities" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.172356 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="extract-utilities" Jan 27 15:15:00 crc kubenswrapper[4729]: E0127 15:15:00.172377 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.172384 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.172682 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2191fc7d-925f-4afe-a9f0-682980136dff" containerName="registry-server" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.173771 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.176318 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.176968 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.199631 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9"] Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.318333 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1b8f26b-903d-4da4-b389-f805f726c63d-config-volume\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.318595 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1b8f26b-903d-4da4-b389-f805f726c63d-secret-volume\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.319054 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsj5z\" (UniqueName: \"kubernetes.io/projected/a1b8f26b-903d-4da4-b389-f805f726c63d-kube-api-access-bsj5z\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.422074 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1b8f26b-903d-4da4-b389-f805f726c63d-secret-volume\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.422206 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsj5z\" (UniqueName: \"kubernetes.io/projected/a1b8f26b-903d-4da4-b389-f805f726c63d-kube-api-access-bsj5z\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.422348 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a1b8f26b-903d-4da4-b389-f805f726c63d-config-volume\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.423267 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1b8f26b-903d-4da4-b389-f805f726c63d-config-volume\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.437264 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1b8f26b-903d-4da4-b389-f805f726c63d-secret-volume\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.439604 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsj5z\" (UniqueName: \"kubernetes.io/projected/a1b8f26b-903d-4da4-b389-f805f726c63d-kube-api-access-bsj5z\") pod \"collect-profiles-29492115-q2jd9\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:00 crc kubenswrapper[4729]: I0127 15:15:00.500213 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:01 crc kubenswrapper[4729]: I0127 15:15:01.748139 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9"] Jan 27 15:15:02 crc kubenswrapper[4729]: I0127 15:15:02.030833 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" event={"ID":"a1b8f26b-903d-4da4-b389-f805f726c63d","Type":"ContainerStarted","Data":"594af304ac5f53a53c5e016f3f3822420af85eb011ecf6aaa9d3890b08e8bb89"} Jan 27 15:15:03 crc kubenswrapper[4729]: I0127 15:15:03.046343 4729 generic.go:334] "Generic (PLEG): container finished" podID="a1b8f26b-903d-4da4-b389-f805f726c63d" containerID="0bbf2e0c6297cd8b537147bb11b80d1bd65d40f06d931914f16175e0cc7a3db7" exitCode=0 Jan 27 15:15:03 crc kubenswrapper[4729]: I0127 15:15:03.046600 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" event={"ID":"a1b8f26b-903d-4da4-b389-f805f726c63d","Type":"ContainerDied","Data":"0bbf2e0c6297cd8b537147bb11b80d1bd65d40f06d931914f16175e0cc7a3db7"} Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.556261 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.656644 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1b8f26b-903d-4da4-b389-f805f726c63d-secret-volume\") pod \"a1b8f26b-903d-4da4-b389-f805f726c63d\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.656940 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsj5z\" (UniqueName: \"kubernetes.io/projected/a1b8f26b-903d-4da4-b389-f805f726c63d-kube-api-access-bsj5z\") pod \"a1b8f26b-903d-4da4-b389-f805f726c63d\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.656990 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1b8f26b-903d-4da4-b389-f805f726c63d-config-volume\") pod \"a1b8f26b-903d-4da4-b389-f805f726c63d\" (UID: \"a1b8f26b-903d-4da4-b389-f805f726c63d\") " Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.658082 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1b8f26b-903d-4da4-b389-f805f726c63d-config-volume" (OuterVolumeSpecName: "config-volume") pod "a1b8f26b-903d-4da4-b389-f805f726c63d" (UID: "a1b8f26b-903d-4da4-b389-f805f726c63d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.664442 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1b8f26b-903d-4da4-b389-f805f726c63d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a1b8f26b-903d-4da4-b389-f805f726c63d" (UID: "a1b8f26b-903d-4da4-b389-f805f726c63d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.668524 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1b8f26b-903d-4da4-b389-f805f726c63d-kube-api-access-bsj5z" (OuterVolumeSpecName: "kube-api-access-bsj5z") pod "a1b8f26b-903d-4da4-b389-f805f726c63d" (UID: "a1b8f26b-903d-4da4-b389-f805f726c63d"). InnerVolumeSpecName "kube-api-access-bsj5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.759991 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsj5z\" (UniqueName: \"kubernetes.io/projected/a1b8f26b-903d-4da4-b389-f805f726c63d-kube-api-access-bsj5z\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.760040 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1b8f26b-903d-4da4-b389-f805f726c63d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:04 crc kubenswrapper[4729]: I0127 15:15:04.760053 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1b8f26b-903d-4da4-b389-f805f726c63d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:15:05 crc kubenswrapper[4729]: I0127 15:15:05.071084 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" event={"ID":"a1b8f26b-903d-4da4-b389-f805f726c63d","Type":"ContainerDied","Data":"594af304ac5f53a53c5e016f3f3822420af85eb011ecf6aaa9d3890b08e8bb89"} Jan 27 15:15:05 crc kubenswrapper[4729]: I0127 15:15:05.071422 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594af304ac5f53a53c5e016f3f3822420af85eb011ecf6aaa9d3890b08e8bb89" Jan 27 15:15:05 crc kubenswrapper[4729]: I0127 15:15:05.071159 4729 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9" Jan 27 15:15:05 crc kubenswrapper[4729]: I0127 15:15:05.660489 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs"] Jan 27 15:15:05 crc kubenswrapper[4729]: I0127 15:15:05.674621 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492070-2z2zs"] Jan 27 15:15:06 crc kubenswrapper[4729]: I0127 15:15:06.069147 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e51691-71cb-4c26-971b-6eda98d0b95f" path="/var/lib/kubelet/pods/80e51691-71cb-4c26-971b-6eda98d0b95f/volumes" Jan 27 15:15:09 crc kubenswrapper[4729]: I0127 15:15:09.888334 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j6qsc"] Jan 27 15:15:09 crc kubenswrapper[4729]: E0127 15:15:09.890057 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1b8f26b-903d-4da4-b389-f805f726c63d" containerName="collect-profiles" Jan 27 15:15:09 crc kubenswrapper[4729]: I0127 15:15:09.890076 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1b8f26b-903d-4da4-b389-f805f726c63d" containerName="collect-profiles" Jan 27 15:15:09 crc kubenswrapper[4729]: I0127 15:15:09.890422 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1b8f26b-903d-4da4-b389-f805f726c63d" containerName="collect-profiles" Jan 27 15:15:09 crc kubenswrapper[4729]: I0127 15:15:09.892670 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:09 crc kubenswrapper[4729]: I0127 15:15:09.910422 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6qsc"] Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.006452 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmbxg\" (UniqueName: \"kubernetes.io/projected/08d201f5-2887-48f1-ace1-7f86150b950f-kube-api-access-wmbxg\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.006800 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-utilities\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.007123 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-catalog-content\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.109384 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-catalog-content\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.109520 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wmbxg\" (UniqueName: \"kubernetes.io/projected/08d201f5-2887-48f1-ace1-7f86150b950f-kube-api-access-wmbxg\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.109549 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-utilities\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.110193 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-catalog-content\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.110229 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-utilities\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.151822 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmbxg\" (UniqueName: \"kubernetes.io/projected/08d201f5-2887-48f1-ace1-7f86150b950f-kube-api-access-wmbxg\") pod \"community-operators-j6qsc\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.226253 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:10 crc kubenswrapper[4729]: I0127 15:15:10.912450 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j6qsc"] Jan 27 15:15:11 crc kubenswrapper[4729]: I0127 15:15:11.138332 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerStarted","Data":"267a89381a9a961d42a3ad89a2e00e1f2e93e52150311fa006ad6613cfdf5bd8"} Jan 27 15:15:12 crc kubenswrapper[4729]: I0127 15:15:12.155261 4729 generic.go:334] "Generic (PLEG): container finished" podID="08d201f5-2887-48f1-ace1-7f86150b950f" containerID="6366f6125f43f59480d2e735f5bf51bbda30adbb6fb44621b9a278c0e0238ff3" exitCode=0 Jan 27 15:15:12 crc kubenswrapper[4729]: I0127 15:15:12.155534 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerDied","Data":"6366f6125f43f59480d2e735f5bf51bbda30adbb6fb44621b9a278c0e0238ff3"} Jan 27 15:15:12 crc kubenswrapper[4729]: I0127 15:15:12.158979 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:15:13 crc kubenswrapper[4729]: I0127 15:15:13.535059 4729 scope.go:117] "RemoveContainer" containerID="b81250f3ae57881d41110f6a33c22bba78f9b776c44170849e3a1487282d4142" Jan 27 15:15:14 crc kubenswrapper[4729]: I0127 15:15:14.195992 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerStarted","Data":"9736a35d6743415b0569166386dbb63255339378f3dfeab3839386f217389b97"} Jan 27 15:15:22 crc kubenswrapper[4729]: I0127 15:15:22.300778 4729 generic.go:334] "Generic (PLEG): container finished" podID="08d201f5-2887-48f1-ace1-7f86150b950f" 
containerID="9736a35d6743415b0569166386dbb63255339378f3dfeab3839386f217389b97" exitCode=0 Jan 27 15:15:22 crc kubenswrapper[4729]: I0127 15:15:22.302033 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerDied","Data":"9736a35d6743415b0569166386dbb63255339378f3dfeab3839386f217389b97"} Jan 27 15:15:23 crc kubenswrapper[4729]: I0127 15:15:23.321109 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerStarted","Data":"2c55a9b0bf75dbd1673affb48f3a45b55e2fcf2af5e53c8e56789ad20c2aa949"} Jan 27 15:15:23 crc kubenswrapper[4729]: I0127 15:15:23.357207 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j6qsc" podStartSLOduration=3.526764467 podStartE2EDuration="14.357183889s" podCreationTimestamp="2026-01-27 15:15:09 +0000 UTC" firstStartedPulling="2026-01-27 15:15:12.15824765 +0000 UTC m=+4198.742438654" lastFinishedPulling="2026-01-27 15:15:22.988667072 +0000 UTC m=+4209.572858076" observedRunningTime="2026-01-27 15:15:23.343799529 +0000 UTC m=+4209.927990553" watchObservedRunningTime="2026-01-27 15:15:23.357183889 +0000 UTC m=+4209.941374893" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.106744 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7cqfm"] Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.110328 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.123307 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cqfm"] Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.201784 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-catalog-content\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.201837 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-utilities\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.201998 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmbml\" (UniqueName: \"kubernetes.io/projected/1b30bb81-03c3-4500-b41c-7867569a79cb-kube-api-access-mmbml\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.300164 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82gcn"] Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.303338 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.305103 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmbml\" (UniqueName: \"kubernetes.io/projected/1b30bb81-03c3-4500-b41c-7867569a79cb-kube-api-access-mmbml\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.305599 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-catalog-content\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.305637 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-utilities\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.306475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-utilities\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.310273 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-catalog-content\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " 
pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.325439 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82gcn"] Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.336655 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmbml\" (UniqueName: \"kubernetes.io/projected/1b30bb81-03c3-4500-b41c-7867569a79cb-kube-api-access-mmbml\") pod \"redhat-marketplace-7cqfm\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.416779 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-utilities\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.417016 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-catalog-content\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.417224 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wc2\" (UniqueName: \"kubernetes.io/projected/c434edfa-6206-49a0-971d-410806d87655-kube-api-access-j9wc2\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.450142 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.520187 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-utilities\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.520583 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-catalog-content\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.520729 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wc2\" (UniqueName: \"kubernetes.io/projected/c434edfa-6206-49a0-971d-410806d87655-kube-api-access-j9wc2\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.521616 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-utilities\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.522811 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-catalog-content\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " 
pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.570983 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wc2\" (UniqueName: \"kubernetes.io/projected/c434edfa-6206-49a0-971d-410806d87655-kube-api-access-j9wc2\") pod \"redhat-operators-82gcn\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:26 crc kubenswrapper[4729]: I0127 15:15:26.627761 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:27 crc kubenswrapper[4729]: I0127 15:15:27.211393 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cqfm"] Jan 27 15:15:27 crc kubenswrapper[4729]: W0127 15:15:27.229339 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b30bb81_03c3_4500_b41c_7867569a79cb.slice/crio-7e07d7dfba16a54782d94cb197820f1a6f46169382c21e6cd3e08e76b4a00333 WatchSource:0}: Error finding container 7e07d7dfba16a54782d94cb197820f1a6f46169382c21e6cd3e08e76b4a00333: Status 404 returned error can't find the container with id 7e07d7dfba16a54782d94cb197820f1a6f46169382c21e6cd3e08e76b4a00333 Jan 27 15:15:27 crc kubenswrapper[4729]: I0127 15:15:27.425075 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerStarted","Data":"7e07d7dfba16a54782d94cb197820f1a6f46169382c21e6cd3e08e76b4a00333"} Jan 27 15:15:27 crc kubenswrapper[4729]: I0127 15:15:27.491042 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82gcn"] Jan 27 15:15:27 crc kubenswrapper[4729]: W0127 15:15:27.524804 4729 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc434edfa_6206_49a0_971d_410806d87655.slice/crio-023361b4b47f119a1c6c09f20f1b4c596fbfbd808cad81e05c37b6b228f140c2 WatchSource:0}: Error finding container 023361b4b47f119a1c6c09f20f1b4c596fbfbd808cad81e05c37b6b228f140c2: Status 404 returned error can't find the container with id 023361b4b47f119a1c6c09f20f1b4c596fbfbd808cad81e05c37b6b228f140c2 Jan 27 15:15:28 crc kubenswrapper[4729]: I0127 15:15:28.436045 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerStarted","Data":"023361b4b47f119a1c6c09f20f1b4c596fbfbd808cad81e05c37b6b228f140c2"} Jan 27 15:15:28 crc kubenswrapper[4729]: I0127 15:15:28.438315 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerStarted","Data":"aefb8b6948866b9e28950a3cb78a97a84fd48408c14f5b58e022fdad7250af61"} Jan 27 15:15:29 crc kubenswrapper[4729]: I0127 15:15:29.454740 4729 generic.go:334] "Generic (PLEG): container finished" podID="c434edfa-6206-49a0-971d-410806d87655" containerID="e8faad1b7858e07ac829be04e81365f5c7781b4d6797059e563f5eb7dde67317" exitCode=0 Jan 27 15:15:29 crc kubenswrapper[4729]: I0127 15:15:29.454854 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerDied","Data":"e8faad1b7858e07ac829be04e81365f5c7781b4d6797059e563f5eb7dde67317"} Jan 27 15:15:29 crc kubenswrapper[4729]: I0127 15:15:29.460660 4729 generic.go:334] "Generic (PLEG): container finished" podID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerID="aefb8b6948866b9e28950a3cb78a97a84fd48408c14f5b58e022fdad7250af61" exitCode=0 Jan 27 15:15:29 crc kubenswrapper[4729]: I0127 15:15:29.460714 4729 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerDied","Data":"aefb8b6948866b9e28950a3cb78a97a84fd48408c14f5b58e022fdad7250af61"} Jan 27 15:15:30 crc kubenswrapper[4729]: I0127 15:15:30.226626 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:30 crc kubenswrapper[4729]: I0127 15:15:30.227019 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:15:31 crc kubenswrapper[4729]: I0127 15:15:31.287700 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6qsc" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:31 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:31 crc kubenswrapper[4729]: > Jan 27 15:15:31 crc kubenswrapper[4729]: I0127 15:15:31.483827 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerStarted","Data":"5dcba19277cc6f77e87ee6cfdbb416b0171f16b1a82c7f7672e65b34dcf84042"} Jan 27 15:15:31 crc kubenswrapper[4729]: I0127 15:15:31.486394 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerStarted","Data":"31a4616f76718251e7ffdfa8204668f941a92260384fa5819fb15812064dcf85"} Jan 27 15:15:36 crc kubenswrapper[4729]: I0127 15:15:36.563427 4729 generic.go:334] "Generic (PLEG): container finished" podID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerID="5dcba19277cc6f77e87ee6cfdbb416b0171f16b1a82c7f7672e65b34dcf84042" exitCode=0 Jan 27 15:15:36 crc kubenswrapper[4729]: I0127 15:15:36.563515 4729 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerDied","Data":"5dcba19277cc6f77e87ee6cfdbb416b0171f16b1a82c7f7672e65b34dcf84042"} Jan 27 15:15:38 crc kubenswrapper[4729]: I0127 15:15:38.592522 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerStarted","Data":"f228bb8a97d0cd81383b9037943928de266504e98f7a73e20928dd89f863b731"} Jan 27 15:15:38 crc kubenswrapper[4729]: I0127 15:15:38.614252 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7cqfm" podStartSLOduration=4.673643486 podStartE2EDuration="12.614234777s" podCreationTimestamp="2026-01-27 15:15:26 +0000 UTC" firstStartedPulling="2026-01-27 15:15:29.463155245 +0000 UTC m=+4216.047346249" lastFinishedPulling="2026-01-27 15:15:37.403746536 +0000 UTC m=+4223.987937540" observedRunningTime="2026-01-27 15:15:38.612950004 +0000 UTC m=+4225.197141018" watchObservedRunningTime="2026-01-27 15:15:38.614234777 +0000 UTC m=+4225.198425781" Jan 27 15:15:41 crc kubenswrapper[4729]: I0127 15:15:41.290470 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6qsc" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:41 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:41 crc kubenswrapper[4729]: > Jan 27 15:15:43 crc kubenswrapper[4729]: I0127 15:15:43.658734 4729 generic.go:334] "Generic (PLEG): container finished" podID="c434edfa-6206-49a0-971d-410806d87655" containerID="31a4616f76718251e7ffdfa8204668f941a92260384fa5819fb15812064dcf85" exitCode=0 Jan 27 15:15:43 crc kubenswrapper[4729]: I0127 15:15:43.659108 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerDied","Data":"31a4616f76718251e7ffdfa8204668f941a92260384fa5819fb15812064dcf85"} Jan 27 15:15:45 crc kubenswrapper[4729]: I0127 15:15:45.684560 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerStarted","Data":"052b1497392c73609e83acf9c1a5f027771781db16fbf8962d18e1d9fcab11d3"} Jan 27 15:15:45 crc kubenswrapper[4729]: I0127 15:15:45.710189 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82gcn" podStartSLOduration=5.012041276 podStartE2EDuration="19.710157322s" podCreationTimestamp="2026-01-27 15:15:26 +0000 UTC" firstStartedPulling="2026-01-27 15:15:29.457517148 +0000 UTC m=+4216.041708162" lastFinishedPulling="2026-01-27 15:15:44.155633194 +0000 UTC m=+4230.739824208" observedRunningTime="2026-01-27 15:15:45.705641095 +0000 UTC m=+4232.289832119" watchObservedRunningTime="2026-01-27 15:15:45.710157322 +0000 UTC m=+4232.294348336" Jan 27 15:15:46 crc kubenswrapper[4729]: I0127 15:15:46.452452 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:46 crc kubenswrapper[4729]: I0127 15:15:46.453943 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:15:46 crc kubenswrapper[4729]: I0127 15:15:46.629161 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:46 crc kubenswrapper[4729]: I0127 15:15:46.629229 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:15:47 crc kubenswrapper[4729]: I0127 15:15:47.505630 4729 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7cqfm" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:47 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:47 crc kubenswrapper[4729]: > Jan 27 15:15:47 crc kubenswrapper[4729]: I0127 15:15:47.680860 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:47 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:47 crc kubenswrapper[4729]: > Jan 27 15:15:51 crc kubenswrapper[4729]: I0127 15:15:51.280425 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6qsc" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:51 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:51 crc kubenswrapper[4729]: > Jan 27 15:15:57 crc kubenswrapper[4729]: I0127 15:15:57.511405 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-7cqfm" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:57 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:57 crc kubenswrapper[4729]: > Jan 27 15:15:57 crc kubenswrapper[4729]: I0127 15:15:57.680265 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:15:57 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:15:57 crc kubenswrapper[4729]: > 
Jan 27 15:16:01 crc kubenswrapper[4729]: I0127 15:16:01.289231 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-j6qsc" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:01 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:01 crc kubenswrapper[4729]: > Jan 27 15:16:06 crc kubenswrapper[4729]: I0127 15:16:06.527627 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:16:06 crc kubenswrapper[4729]: I0127 15:16:06.589180 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:16:06 crc kubenswrapper[4729]: I0127 15:16:06.786630 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cqfm"] Jan 27 15:16:07 crc kubenswrapper[4729]: I0127 15:16:07.678833 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:07 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:07 crc kubenswrapper[4729]: > Jan 27 15:16:07 crc kubenswrapper[4729]: I0127 15:16:07.953064 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7cqfm" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="registry-server" containerID="cri-o://f228bb8a97d0cd81383b9037943928de266504e98f7a73e20928dd89f863b731" gracePeriod=2 Jan 27 15:16:08 crc kubenswrapper[4729]: I0127 15:16:08.969908 4729 generic.go:334] "Generic (PLEG): container finished" podID="1b30bb81-03c3-4500-b41c-7867569a79cb" 
containerID="f228bb8a97d0cd81383b9037943928de266504e98f7a73e20928dd89f863b731" exitCode=0 Jan 27 15:16:08 crc kubenswrapper[4729]: I0127 15:16:08.969966 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerDied","Data":"f228bb8a97d0cd81383b9037943928de266504e98f7a73e20928dd89f863b731"} Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.150977 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.173551 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-catalog-content\") pod \"1b30bb81-03c3-4500-b41c-7867569a79cb\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.173859 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-utilities\") pod \"1b30bb81-03c3-4500-b41c-7867569a79cb\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.174131 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmbml\" (UniqueName: \"kubernetes.io/projected/1b30bb81-03c3-4500-b41c-7867569a79cb-kube-api-access-mmbml\") pod \"1b30bb81-03c3-4500-b41c-7867569a79cb\" (UID: \"1b30bb81-03c3-4500-b41c-7867569a79cb\") " Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.175765 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-utilities" (OuterVolumeSpecName: "utilities") pod "1b30bb81-03c3-4500-b41c-7867569a79cb" (UID: 
"1b30bb81-03c3-4500-b41c-7867569a79cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.177557 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.188687 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b30bb81-03c3-4500-b41c-7867569a79cb-kube-api-access-mmbml" (OuterVolumeSpecName: "kube-api-access-mmbml") pod "1b30bb81-03c3-4500-b41c-7867569a79cb" (UID: "1b30bb81-03c3-4500-b41c-7867569a79cb"). InnerVolumeSpecName "kube-api-access-mmbml". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.253259 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b30bb81-03c3-4500-b41c-7867569a79cb" (UID: "1b30bb81-03c3-4500-b41c-7867569a79cb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.281386 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmbml\" (UniqueName: \"kubernetes.io/projected/1b30bb81-03c3-4500-b41c-7867569a79cb-kube-api-access-mmbml\") on node \"crc\" DevicePath \"\"" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.281430 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b30bb81-03c3-4500-b41c-7867569a79cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.982238 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cqfm" event={"ID":"1b30bb81-03c3-4500-b41c-7867569a79cb","Type":"ContainerDied","Data":"7e07d7dfba16a54782d94cb197820f1a6f46169382c21e6cd3e08e76b4a00333"} Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.982328 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cqfm" Jan 27 15:16:09 crc kubenswrapper[4729]: I0127 15:16:09.982523 4729 scope.go:117] "RemoveContainer" containerID="f228bb8a97d0cd81383b9037943928de266504e98f7a73e20928dd89f863b731" Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.013155 4729 scope.go:117] "RemoveContainer" containerID="5dcba19277cc6f77e87ee6cfdbb416b0171f16b1a82c7f7672e65b34dcf84042" Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.023667 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cqfm"] Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.036231 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cqfm"] Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.043360 4729 scope.go:117] "RemoveContainer" containerID="aefb8b6948866b9e28950a3cb78a97a84fd48408c14f5b58e022fdad7250af61" Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.067983 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" path="/var/lib/kubelet/pods/1b30bb81-03c3-4500-b41c-7867569a79cb/volumes" Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.284721 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:16:10 crc kubenswrapper[4729]: I0127 15:16:10.341507 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:16:12 crc kubenswrapper[4729]: I0127 15:16:12.582166 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6qsc"] Jan 27 15:16:12 crc kubenswrapper[4729]: I0127 15:16:12.582799 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j6qsc" 
podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" containerID="cri-o://2c55a9b0bf75dbd1673affb48f3a45b55e2fcf2af5e53c8e56789ad20c2aa949" gracePeriod=2 Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.019801 4729 generic.go:334] "Generic (PLEG): container finished" podID="08d201f5-2887-48f1-ace1-7f86150b950f" containerID="2c55a9b0bf75dbd1673affb48f3a45b55e2fcf2af5e53c8e56789ad20c2aa949" exitCode=0 Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.020126 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerDied","Data":"2c55a9b0bf75dbd1673affb48f3a45b55e2fcf2af5e53c8e56789ad20c2aa949"} Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.147947 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.284989 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-catalog-content\") pod \"08d201f5-2887-48f1-ace1-7f86150b950f\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.285261 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmbxg\" (UniqueName: \"kubernetes.io/projected/08d201f5-2887-48f1-ace1-7f86150b950f-kube-api-access-wmbxg\") pod \"08d201f5-2887-48f1-ace1-7f86150b950f\" (UID: \"08d201f5-2887-48f1-ace1-7f86150b950f\") " Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.285351 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-utilities\") pod \"08d201f5-2887-48f1-ace1-7f86150b950f\" (UID: 
\"08d201f5-2887-48f1-ace1-7f86150b950f\") " Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.292743 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08d201f5-2887-48f1-ace1-7f86150b950f-kube-api-access-wmbxg" (OuterVolumeSpecName: "kube-api-access-wmbxg") pod "08d201f5-2887-48f1-ace1-7f86150b950f" (UID: "08d201f5-2887-48f1-ace1-7f86150b950f"). InnerVolumeSpecName "kube-api-access-wmbxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.295190 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-utilities" (OuterVolumeSpecName: "utilities") pod "08d201f5-2887-48f1-ace1-7f86150b950f" (UID: "08d201f5-2887-48f1-ace1-7f86150b950f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.388698 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmbxg\" (UniqueName: \"kubernetes.io/projected/08d201f5-2887-48f1-ace1-7f86150b950f-kube-api-access-wmbxg\") on node \"crc\" DevicePath \"\"" Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.388733 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.419843 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08d201f5-2887-48f1-ace1-7f86150b950f" (UID: "08d201f5-2887-48f1-ace1-7f86150b950f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:16:13 crc kubenswrapper[4729]: I0127 15:16:13.495800 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08d201f5-2887-48f1-ace1-7f86150b950f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.033757 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j6qsc" event={"ID":"08d201f5-2887-48f1-ace1-7f86150b950f","Type":"ContainerDied","Data":"267a89381a9a961d42a3ad89a2e00e1f2e93e52150311fa006ad6613cfdf5bd8"} Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.034319 4729 scope.go:117] "RemoveContainer" containerID="2c55a9b0bf75dbd1673affb48f3a45b55e2fcf2af5e53c8e56789ad20c2aa949" Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.034114 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j6qsc" Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.071921 4729 scope.go:117] "RemoveContainer" containerID="9736a35d6743415b0569166386dbb63255339378f3dfeab3839386f217389b97" Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.105723 4729 scope.go:117] "RemoveContainer" containerID="6366f6125f43f59480d2e735f5bf51bbda30adbb6fb44621b9a278c0e0238ff3" Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.134939 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j6qsc"] Jan 27 15:16:14 crc kubenswrapper[4729]: I0127 15:16:14.157367 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j6qsc"] Jan 27 15:16:16 crc kubenswrapper[4729]: I0127 15:16:16.068818 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" path="/var/lib/kubelet/pods/08d201f5-2887-48f1-ace1-7f86150b950f/volumes" Jan 27 15:16:17 crc 
kubenswrapper[4729]: I0127 15:16:17.685492 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:17 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:17 crc kubenswrapper[4729]: > Jan 27 15:16:22 crc kubenswrapper[4729]: E0127 15:16:22.468631 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:26 crc kubenswrapper[4729]: E0127 15:16:26.481975 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:27 crc kubenswrapper[4729]: E0127 15:16:27.461773 4729 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.171:36598->38.129.56.171:42429: write tcp 38.129.56.171:36598->38.129.56.171:42429: write: broken pipe Jan 27 15:16:27 crc kubenswrapper[4729]: I0127 15:16:27.684646 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:27 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:27 crc kubenswrapper[4729]: > Jan 27 15:16:32 crc kubenswrapper[4729]: E0127 15:16:32.833824 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:37 crc kubenswrapper[4729]: I0127 15:16:37.678413 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:37 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:37 crc kubenswrapper[4729]: > Jan 27 15:16:41 crc kubenswrapper[4729]: E0127 15:16:41.761766 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:42 crc kubenswrapper[4729]: E0127 15:16:42.882743 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:47 crc kubenswrapper[4729]: I0127 15:16:47.682844 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:47 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:47 crc kubenswrapper[4729]: > Jan 27 15:16:48 crc kubenswrapper[4729]: E0127 15:16:48.272293 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory 
cache]" Jan 27 15:16:48 crc kubenswrapper[4729]: E0127 15:16:48.272316 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:52 crc kubenswrapper[4729]: E0127 15:16:52.931322 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:56 crc kubenswrapper[4729]: E0127 15:16:56.796668 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:16:57 crc kubenswrapper[4729]: I0127 15:16:57.692156 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:16:57 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:16:57 crc kubenswrapper[4729]: > Jan 27 15:17:03 crc kubenswrapper[4729]: E0127 15:17:03.008303 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:17:07 crc kubenswrapper[4729]: I0127 15:17:07.686408 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" 
containerName="registry-server" probeResult="failure" output=< Jan 27 15:17:07 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:17:07 crc kubenswrapper[4729]: > Jan 27 15:17:11 crc kubenswrapper[4729]: E0127 15:17:11.759781 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:17:13 crc kubenswrapper[4729]: E0127 15:17:13.097221 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d201f5_2887_48f1_ace1_7f86150b950f.slice\": RecentStats: unable to find data in memory cache]" Jan 27 15:17:17 crc kubenswrapper[4729]: I0127 15:17:17.679255 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" probeResult="failure" output=< Jan 27 15:17:17 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:17:17 crc kubenswrapper[4729]: > Jan 27 15:17:17 crc kubenswrapper[4729]: I0127 15:17:17.679854 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:17 crc kubenswrapper[4729]: I0127 15:17:17.680724 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"052b1497392c73609e83acf9c1a5f027771781db16fbf8962d18e1d9fcab11d3"} pod="openshift-marketplace/redhat-operators-82gcn" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 27 15:17:17 crc kubenswrapper[4729]: I0127 15:17:17.680766 4729 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" containerID="cri-o://052b1497392c73609e83acf9c1a5f027771781db16fbf8962d18e1d9fcab11d3" gracePeriod=30 Jan 27 15:17:22 crc kubenswrapper[4729]: I0127 15:17:22.655775 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:17:22 crc kubenswrapper[4729]: I0127 15:17:22.656449 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:17:26 crc kubenswrapper[4729]: I0127 15:17:26.907599 4729 generic.go:334] "Generic (PLEG): container finished" podID="c434edfa-6206-49a0-971d-410806d87655" containerID="052b1497392c73609e83acf9c1a5f027771781db16fbf8962d18e1d9fcab11d3" exitCode=0 Jan 27 15:17:26 crc kubenswrapper[4729]: I0127 15:17:26.907659 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerDied","Data":"052b1497392c73609e83acf9c1a5f027771781db16fbf8962d18e1d9fcab11d3"} Jan 27 15:17:27 crc kubenswrapper[4729]: I0127 15:17:27.923891 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerStarted","Data":"bfadf7f11f318339a9a55ea541e73802c0e045a4de5b5ccf27b19fc62d575d3f"} Jan 27 15:17:36 crc kubenswrapper[4729]: I0127 15:17:36.628653 4729 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:36 crc kubenswrapper[4729]: I0127 15:17:36.629403 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:36 crc kubenswrapper[4729]: I0127 15:17:36.687175 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:37 crc kubenswrapper[4729]: I0127 15:17:37.093559 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:37 crc kubenswrapper[4729]: I0127 15:17:37.151642 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82gcn"] Jan 27 15:17:39 crc kubenswrapper[4729]: I0127 15:17:39.060217 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82gcn" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" containerID="cri-o://bfadf7f11f318339a9a55ea541e73802c0e045a4de5b5ccf27b19fc62d575d3f" gracePeriod=2 Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.080456 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerDied","Data":"bfadf7f11f318339a9a55ea541e73802c0e045a4de5b5ccf27b19fc62d575d3f"} Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.080821 4729 scope.go:117] "RemoveContainer" containerID="052b1497392c73609e83acf9c1a5f027771781db16fbf8962d18e1d9fcab11d3" Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.080383 4729 generic.go:334] "Generic (PLEG): container finished" podID="c434edfa-6206-49a0-971d-410806d87655" containerID="bfadf7f11f318339a9a55ea541e73802c0e045a4de5b5ccf27b19fc62d575d3f" exitCode=0 Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 
15:17:40.747062 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.865894 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-catalog-content\") pod \"c434edfa-6206-49a0-971d-410806d87655\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.866015 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-utilities\") pod \"c434edfa-6206-49a0-971d-410806d87655\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.866076 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9wc2\" (UniqueName: \"kubernetes.io/projected/c434edfa-6206-49a0-971d-410806d87655-kube-api-access-j9wc2\") pod \"c434edfa-6206-49a0-971d-410806d87655\" (UID: \"c434edfa-6206-49a0-971d-410806d87655\") " Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.867958 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-utilities" (OuterVolumeSpecName: "utilities") pod "c434edfa-6206-49a0-971d-410806d87655" (UID: "c434edfa-6206-49a0-971d-410806d87655"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.901588 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c434edfa-6206-49a0-971d-410806d87655-kube-api-access-j9wc2" (OuterVolumeSpecName: "kube-api-access-j9wc2") pod "c434edfa-6206-49a0-971d-410806d87655" (UID: "c434edfa-6206-49a0-971d-410806d87655"). InnerVolumeSpecName "kube-api-access-j9wc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.969820 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:17:40 crc kubenswrapper[4729]: I0127 15:17:40.969899 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9wc2\" (UniqueName: \"kubernetes.io/projected/c434edfa-6206-49a0-971d-410806d87655-kube-api-access-j9wc2\") on node \"crc\" DevicePath \"\"" Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.035827 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c434edfa-6206-49a0-971d-410806d87655" (UID: "c434edfa-6206-49a0-971d-410806d87655"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.072255 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c434edfa-6206-49a0-971d-410806d87655-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.096191 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82gcn" event={"ID":"c434edfa-6206-49a0-971d-410806d87655","Type":"ContainerDied","Data":"023361b4b47f119a1c6c09f20f1b4c596fbfbd808cad81e05c37b6b228f140c2"} Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.096247 4729 scope.go:117] "RemoveContainer" containerID="bfadf7f11f318339a9a55ea541e73802c0e045a4de5b5ccf27b19fc62d575d3f" Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.096303 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82gcn" Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.139047 4729 scope.go:117] "RemoveContainer" containerID="31a4616f76718251e7ffdfa8204668f941a92260384fa5819fb15812064dcf85" Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.151285 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82gcn"] Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.169809 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82gcn"] Jan 27 15:17:41 crc kubenswrapper[4729]: I0127 15:17:41.190899 4729 scope.go:117] "RemoveContainer" containerID="e8faad1b7858e07ac829be04e81365f5c7781b4d6797059e563f5eb7dde67317" Jan 27 15:17:42 crc kubenswrapper[4729]: I0127 15:17:42.065055 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c434edfa-6206-49a0-971d-410806d87655" path="/var/lib/kubelet/pods/c434edfa-6206-49a0-971d-410806d87655/volumes" Jan 27 15:17:52 crc 
kubenswrapper[4729]: I0127 15:17:52.655237 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:17:52 crc kubenswrapper[4729]: I0127 15:17:52.656315 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:18:22 crc kubenswrapper[4729]: I0127 15:18:22.655933 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:18:22 crc kubenswrapper[4729]: I0127 15:18:22.656995 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:18:22 crc kubenswrapper[4729]: I0127 15:18:22.657084 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:18:22 crc kubenswrapper[4729]: I0127 15:18:22.659061 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"} 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:18:22 crc kubenswrapper[4729]: I0127 15:18:22.659192 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" gracePeriod=600 Jan 27 15:18:22 crc kubenswrapper[4729]: E0127 15:18:22.821028 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:18:23 crc kubenswrapper[4729]: I0127 15:18:23.720547 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" exitCode=0 Jan 27 15:18:23 crc kubenswrapper[4729]: I0127 15:18:23.722145 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"} Jan 27 15:18:23 crc kubenswrapper[4729]: I0127 15:18:23.722265 4729 scope.go:117] "RemoveContainer" containerID="c84f54723b147f334aeb8dea01ea3f8e3a099d597cf12aba62583d361196513a" Jan 27 15:18:23 crc kubenswrapper[4729]: I0127 15:18:23.728094 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 
27 15:18:23 crc kubenswrapper[4729]: E0127 15:18:23.730824 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:18:39 crc kubenswrapper[4729]: I0127 15:18:39.051441 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 27 15:18:39 crc kubenswrapper[4729]: E0127 15:18:39.052224 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:18:50 crc kubenswrapper[4729]: I0127 15:18:50.052511 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 27 15:18:50 crc kubenswrapper[4729]: E0127 15:18:50.059069 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:19:04 crc kubenswrapper[4729]: I0127 15:19:04.061061 4729 scope.go:117] "RemoveContainer" 
containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 27 15:19:04 crc kubenswrapper[4729]: E0127 15:19:04.062180 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.510815 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrlrk"] Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511771 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="extract-utilities" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511791 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="extract-utilities" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511838 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511847 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511860 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="extract-content" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511868 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="extract-content" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511902 
4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511911 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511933 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="extract-utilities" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511941 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="extract-utilities" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511960 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="extract-utilities" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511968 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="extract-utilities" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.511979 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="extract-content" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.511987 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="extract-content" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.512013 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="extract-content" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512020 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="extract-content" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.512047 4729 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512055 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512338 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b30bb81-03c3-4500-b41c-7867569a79cb" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512360 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="08d201f5-2887-48f1-ace1-7f86150b950f" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512372 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512390 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: E0127 15:19:05.512667 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.512680 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c434edfa-6206-49a0-971d-410806d87655" containerName="registry-server" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.514568 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.532651 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrlrk"] Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.564192 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-utilities\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.564270 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbq48\" (UniqueName: \"kubernetes.io/projected/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-kube-api-access-kbq48\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.564594 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-catalog-content\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.667542 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-utilities\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.667677 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kbq48\" (UniqueName: \"kubernetes.io/projected/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-kube-api-access-kbq48\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.667860 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-catalog-content\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.668234 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-utilities\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.668257 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-catalog-content\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.692147 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbq48\" (UniqueName: \"kubernetes.io/projected/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-kube-api-access-kbq48\") pod \"certified-operators-vrlrk\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:05 crc kubenswrapper[4729]: I0127 15:19:05.878388 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:06 crc kubenswrapper[4729]: I0127 15:19:06.536060 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrlrk"] Jan 27 15:19:07 crc kubenswrapper[4729]: I0127 15:19:07.296007 4729 generic.go:334] "Generic (PLEG): container finished" podID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerID="d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad" exitCode=0 Jan 27 15:19:07 crc kubenswrapper[4729]: I0127 15:19:07.296283 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrlrk" event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerDied","Data":"d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad"} Jan 27 15:19:07 crc kubenswrapper[4729]: I0127 15:19:07.296310 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrlrk" event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerStarted","Data":"5c35293cd9f7a8a4ee24d0d06d459df1f8b27ccf09d708aff9fdce3c2db9d409"} Jan 27 15:19:09 crc kubenswrapper[4729]: I0127 15:19:09.325172 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrlrk" event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerStarted","Data":"8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071"} Jan 27 15:19:12 crc kubenswrapper[4729]: I0127 15:19:12.375851 4729 generic.go:334] "Generic (PLEG): container finished" podID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerID="8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071" exitCode=0 Jan 27 15:19:12 crc kubenswrapper[4729]: I0127 15:19:12.376029 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrlrk" 
event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerDied","Data":"8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071"} Jan 27 15:19:15 crc kubenswrapper[4729]: I0127 15:19:15.436585 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrlrk" event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerStarted","Data":"ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd"} Jan 27 15:19:15 crc kubenswrapper[4729]: I0127 15:19:15.460087 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrlrk" podStartSLOduration=3.930494995 podStartE2EDuration="10.460070951s" podCreationTimestamp="2026-01-27 15:19:05 +0000 UTC" firstStartedPulling="2026-01-27 15:19:07.297998146 +0000 UTC m=+4433.882189150" lastFinishedPulling="2026-01-27 15:19:13.827574102 +0000 UTC m=+4440.411765106" observedRunningTime="2026-01-27 15:19:15.459389663 +0000 UTC m=+4442.043580687" watchObservedRunningTime="2026-01-27 15:19:15.460070951 +0000 UTC m=+4442.044261965" Jan 27 15:19:15 crc kubenswrapper[4729]: I0127 15:19:15.880791 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:15 crc kubenswrapper[4729]: I0127 15:19:15.880943 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:16 crc kubenswrapper[4729]: I0127 15:19:16.942531 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vrlrk" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="registry-server" probeResult="failure" output=< Jan 27 15:19:16 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:19:16 crc kubenswrapper[4729]: > Jan 27 15:19:20 crc kubenswrapper[4729]: I0127 15:19:20.052328 4729 scope.go:117] 
"RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 27 15:19:20 crc kubenswrapper[4729]: E0127 15:19:20.053544 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:19:25 crc kubenswrapper[4729]: I0127 15:19:25.930504 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:26 crc kubenswrapper[4729]: I0127 15:19:25.999941 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:26 crc kubenswrapper[4729]: I0127 15:19:26.168838 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrlrk"] Jan 27 15:19:27 crc kubenswrapper[4729]: I0127 15:19:27.577217 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vrlrk" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="registry-server" containerID="cri-o://ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd" gracePeriod=2 Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.256166 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.457199 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-utilities\") pod \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.457481 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbq48\" (UniqueName: \"kubernetes.io/projected/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-kube-api-access-kbq48\") pod \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.457581 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-catalog-content\") pod \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\" (UID: \"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398\") " Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.458954 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-utilities" (OuterVolumeSpecName: "utilities") pod "a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" (UID: "a1c3b682-9f8e-48dd-acde-ee0c0a5ce398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.465566 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-kube-api-access-kbq48" (OuterVolumeSpecName: "kube-api-access-kbq48") pod "a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" (UID: "a1c3b682-9f8e-48dd-acde-ee0c0a5ce398"). InnerVolumeSpecName "kube-api-access-kbq48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.512191 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" (UID: "a1c3b682-9f8e-48dd-acde-ee0c0a5ce398"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.560822 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbq48\" (UniqueName: \"kubernetes.io/projected/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-kube-api-access-kbq48\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.560865 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.560892 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.593237 4729 generic.go:334] "Generic (PLEG): container finished" podID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerID="ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd" exitCode=0 Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.593281 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrlrk" event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerDied","Data":"ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd"} Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.593308 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vrlrk" event={"ID":"a1c3b682-9f8e-48dd-acde-ee0c0a5ce398","Type":"ContainerDied","Data":"5c35293cd9f7a8a4ee24d0d06d459df1f8b27ccf09d708aff9fdce3c2db9d409"} Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.593327 4729 scope.go:117] "RemoveContainer" containerID="ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.593461 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrlrk" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.628638 4729 scope.go:117] "RemoveContainer" containerID="8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.635001 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrlrk"] Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.647640 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vrlrk"] Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.671724 4729 scope.go:117] "RemoveContainer" containerID="d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.738212 4729 scope.go:117] "RemoveContainer" containerID="ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd" Jan 27 15:19:28 crc kubenswrapper[4729]: E0127 15:19:28.739125 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd\": container with ID starting with ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd not found: ID does not exist" containerID="ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 
15:19:28.739160 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd"} err="failed to get container status \"ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd\": rpc error: code = NotFound desc = could not find container \"ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd\": container with ID starting with ab444e2ddf34427d6bd0f2c969f10cc65d3de9484506ceb265bb1305f0e5babd not found: ID does not exist" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.739187 4729 scope.go:117] "RemoveContainer" containerID="8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071" Jan 27 15:19:28 crc kubenswrapper[4729]: E0127 15:19:28.739715 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071\": container with ID starting with 8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071 not found: ID does not exist" containerID="8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.739751 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071"} err="failed to get container status \"8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071\": rpc error: code = NotFound desc = could not find container \"8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071\": container with ID starting with 8be64e901d70053a1f87662f232e9cbe6ec46d235e0fa8dc21b15293e3f4b071 not found: ID does not exist" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.739765 4729 scope.go:117] "RemoveContainer" containerID="d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad" Jan 27 15:19:28 crc 
kubenswrapper[4729]: E0127 15:19:28.740245 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad\": container with ID starting with d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad not found: ID does not exist" containerID="d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad" Jan 27 15:19:28 crc kubenswrapper[4729]: I0127 15:19:28.740269 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad"} err="failed to get container status \"d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad\": rpc error: code = NotFound desc = could not find container \"d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad\": container with ID starting with d09740d62d79c7639f8c0486302709396e56c0ca96867d8619a1a39d3e33ddad not found: ID does not exist" Jan 27 15:19:30 crc kubenswrapper[4729]: I0127 15:19:30.080688 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" path="/var/lib/kubelet/pods/a1c3b682-9f8e-48dd-acde-ee0c0a5ce398/volumes" Jan 27 15:19:31 crc kubenswrapper[4729]: I0127 15:19:31.051233 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 27 15:19:31 crc kubenswrapper[4729]: E0127 15:19:31.052514 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:19:44 crc 
kubenswrapper[4729]: I0127 15:19:44.059594 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:19:44 crc kubenswrapper[4729]: E0127 15:19:44.060546 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:19:58 crc kubenswrapper[4729]: I0127 15:19:58.051640 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:19:58 crc kubenswrapper[4729]: E0127 15:19:58.055182 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:20:10 crc kubenswrapper[4729]: I0127 15:20:10.052022 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:20:10 crc kubenswrapper[4729]: E0127 15:20:10.053333 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:20:23 crc kubenswrapper[4729]: I0127 15:20:23.051457 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:20:23 crc kubenswrapper[4729]: E0127 15:20:23.052348 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:20:36 crc kubenswrapper[4729]: I0127 15:20:36.052187 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:20:36 crc kubenswrapper[4729]: E0127 15:20:36.053144 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:20:51 crc kubenswrapper[4729]: I0127 15:20:51.051478 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:20:51 crc kubenswrapper[4729]: E0127 15:20:51.052842 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:21:02 crc kubenswrapper[4729]: I0127 15:21:02.051896 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:21:02 crc kubenswrapper[4729]: E0127 15:21:02.052937 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:21:14 crc kubenswrapper[4729]: I0127 15:21:14.062752 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:21:14 crc kubenswrapper[4729]: E0127 15:21:14.063854 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:21:25 crc kubenswrapper[4729]: I0127 15:21:25.051359 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:21:25 crc kubenswrapper[4729]: E0127 15:21:25.052380 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:21:39 crc kubenswrapper[4729]: I0127 15:21:39.050726 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:21:39 crc kubenswrapper[4729]: E0127 15:21:39.051567 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:21:52 crc kubenswrapper[4729]: I0127 15:21:52.052673 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:21:52 crc kubenswrapper[4729]: E0127 15:21:52.053831 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:22:03 crc kubenswrapper[4729]: I0127 15:22:03.067095 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:22:03 crc kubenswrapper[4729]: E0127 15:22:03.068459 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:22:16 crc kubenswrapper[4729]: I0127 15:22:16.053415 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:22:16 crc kubenswrapper[4729]: E0127 15:22:16.055467 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:22:28 crc kubenswrapper[4729]: I0127 15:22:28.052564 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:22:28 crc kubenswrapper[4729]: E0127 15:22:28.054917 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:22:41 crc kubenswrapper[4729]: I0127 15:22:41.051353 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:22:41 crc kubenswrapper[4729]: E0127 15:22:41.052244 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:22:54 crc kubenswrapper[4729]: I0127 15:22:54.063895 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:22:54 crc kubenswrapper[4729]: E0127 15:22:54.064936 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:23:08 crc kubenswrapper[4729]: I0127 15:23:08.051915 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:23:08 crc kubenswrapper[4729]: E0127 15:23:08.052633 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:23:19 crc kubenswrapper[4729]: I0127 15:23:19.051133 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:23:19 crc kubenswrapper[4729]: E0127 15:23:19.053163 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 15:23:34 crc kubenswrapper[4729]: I0127 15:23:34.059422 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7"
Jan 27 15:23:34 crc kubenswrapper[4729]: I0127 15:23:34.434000 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"27874959f8ab55a0bfbedbd5d59ea270f6769205ac8dea168a8f3ae683e08307"}
Jan 27 15:25:52 crc kubenswrapper[4729]: I0127 15:25:52.654770 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:25:52 crc kubenswrapper[4729]: I0127 15:25:52.655478 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.025513 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pr2r2"]
Jan 27 15:25:57 crc kubenswrapper[4729]: E0127 15:25:57.026864 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="extract-content"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.026903 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="extract-content"
Jan 27 15:25:57 crc kubenswrapper[4729]: E0127 15:25:57.026929 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="registry-server"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.026937 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="registry-server"
Jan 27 15:25:57 crc kubenswrapper[4729]: E0127 15:25:57.026974 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="extract-utilities"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.026983 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="extract-utilities"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.027299 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c3b682-9f8e-48dd-acde-ee0c0a5ce398" containerName="registry-server"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.029607 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.037783 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pr2r2"]
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.107840 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5j4c\" (UniqueName: \"kubernetes.io/projected/3c65a141-347a-46a0-b6dc-4059c5b5ef53-kube-api-access-r5j4c\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.107973 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-catalog-content\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.108054 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-utilities\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.210283 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-catalog-content\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.210404 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-utilities\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.210647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5j4c\" (UniqueName: \"kubernetes.io/projected/3c65a141-347a-46a0-b6dc-4059c5b5ef53-kube-api-access-r5j4c\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.211684 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-catalog-content\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.212505 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-utilities\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.235236 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5j4c\" (UniqueName: \"kubernetes.io/projected/3c65a141-347a-46a0-b6dc-4059c5b5ef53-kube-api-access-r5j4c\") pod \"community-operators-pr2r2\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") " pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:57 crc kubenswrapper[4729]: I0127 15:25:57.364415 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:25:58 crc kubenswrapper[4729]: I0127 15:25:58.093780 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pr2r2"]
Jan 27 15:25:58 crc kubenswrapper[4729]: W0127 15:25:58.109204 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c65a141_347a_46a0_b6dc_4059c5b5ef53.slice/crio-aec121cdeb0b4d5ece118a22c838a362b0458cc106ada756b2764a12393f04a0 WatchSource:0}: Error finding container aec121cdeb0b4d5ece118a22c838a362b0458cc106ada756b2764a12393f04a0: Status 404 returned error can't find the container with id aec121cdeb0b4d5ece118a22c838a362b0458cc106ada756b2764a12393f04a0
Jan 27 15:25:58 crc kubenswrapper[4729]: I0127 15:25:58.219356 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerStarted","Data":"aec121cdeb0b4d5ece118a22c838a362b0458cc106ada756b2764a12393f04a0"}
Jan 27 15:25:59 crc kubenswrapper[4729]: I0127 15:25:59.232585 4729 generic.go:334] "Generic (PLEG): container finished" podID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerID="7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215" exitCode=0
Jan 27 15:25:59 crc kubenswrapper[4729]: I0127 15:25:59.232678 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerDied","Data":"7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215"}
Jan 27 15:25:59 crc kubenswrapper[4729]: I0127 15:25:59.236582 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 27 15:26:02 crc kubenswrapper[4729]: I0127 15:26:02.265588 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerStarted","Data":"35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3"}
Jan 27 15:26:05 crc kubenswrapper[4729]: I0127 15:26:05.313805 4729 generic.go:334] "Generic (PLEG): container finished" podID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerID="35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3" exitCode=0
Jan 27 15:26:05 crc kubenswrapper[4729]: I0127 15:26:05.313932 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerDied","Data":"35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3"}
Jan 27 15:26:06 crc kubenswrapper[4729]: I0127 15:26:06.330252 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerStarted","Data":"b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe"}
Jan 27 15:26:06 crc kubenswrapper[4729]: I0127 15:26:06.356222 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pr2r2" podStartSLOduration=3.768299693 podStartE2EDuration="10.356197223s" podCreationTimestamp="2026-01-27 15:25:56 +0000 UTC" firstStartedPulling="2026-01-27 15:25:59.236364153 +0000 UTC m=+4845.820555157" lastFinishedPulling="2026-01-27 15:26:05.824261683 +0000 UTC m=+4852.408452687" observedRunningTime="2026-01-27 15:26:06.35033565 +0000 UTC m=+4852.934526654" watchObservedRunningTime="2026-01-27 15:26:06.356197223 +0000 UTC m=+4852.940388227"
Jan 27 15:26:07 crc kubenswrapper[4729]: I0127 15:26:07.365104 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:26:07 crc kubenswrapper[4729]: I0127 15:26:07.365869 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:26:08 crc kubenswrapper[4729]: I0127 15:26:08.423303 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-pr2r2" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="registry-server" probeResult="failure" output=<
Jan 27 15:26:08 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s
Jan 27 15:26:08 crc kubenswrapper[4729]: >
Jan 27 15:26:17 crc kubenswrapper[4729]: I0127 15:26:17.787648 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:26:17 crc kubenswrapper[4729]: I0127 15:26:17.849155 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:26:18 crc kubenswrapper[4729]: I0127 15:26:18.036355 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pr2r2"]
Jan 27 15:26:19 crc kubenswrapper[4729]: I0127 15:26:19.482764 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pr2r2" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="registry-server" containerID="cri-o://b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe" gracePeriod=2
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.162222 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.227748 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-catalog-content\") pod \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") "
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.227952 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-utilities\") pod \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") "
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.228124 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5j4c\" (UniqueName: \"kubernetes.io/projected/3c65a141-347a-46a0-b6dc-4059c5b5ef53-kube-api-access-r5j4c\") pod \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\" (UID: \"3c65a141-347a-46a0-b6dc-4059c5b5ef53\") "
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.230626 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-utilities" (OuterVolumeSpecName: "utilities") pod "3c65a141-347a-46a0-b6dc-4059c5b5ef53" (UID: "3c65a141-347a-46a0-b6dc-4059c5b5ef53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.259504 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c65a141-347a-46a0-b6dc-4059c5b5ef53-kube-api-access-r5j4c" (OuterVolumeSpecName: "kube-api-access-r5j4c") pod "3c65a141-347a-46a0-b6dc-4059c5b5ef53" (UID: "3c65a141-347a-46a0-b6dc-4059c5b5ef53"). InnerVolumeSpecName "kube-api-access-r5j4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.294984 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c65a141-347a-46a0-b6dc-4059c5b5ef53" (UID: "3c65a141-347a-46a0-b6dc-4059c5b5ef53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.332126 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.332180 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5j4c\" (UniqueName: \"kubernetes.io/projected/3c65a141-347a-46a0-b6dc-4059c5b5ef53-kube-api-access-r5j4c\") on node \"crc\" DevicePath \"\""
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.332192 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c65a141-347a-46a0-b6dc-4059c5b5ef53-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.498355 4729 generic.go:334] "Generic (PLEG): container finished" podID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerID="b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe" exitCode=0
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.498406 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerDied","Data":"b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe"}
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.498450 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pr2r2" event={"ID":"3c65a141-347a-46a0-b6dc-4059c5b5ef53","Type":"ContainerDied","Data":"aec121cdeb0b4d5ece118a22c838a362b0458cc106ada756b2764a12393f04a0"}
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.498475 4729 scope.go:117] "RemoveContainer" containerID="b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.499127 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pr2r2"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.527475 4729 scope.go:117] "RemoveContainer" containerID="35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.544673 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pr2r2"]
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.558195 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pr2r2"]
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.569763 4729 scope.go:117] "RemoveContainer" containerID="7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.623785 4729 scope.go:117] "RemoveContainer" containerID="b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe"
Jan 27 15:26:20 crc kubenswrapper[4729]: E0127 15:26:20.624253 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe\": container with ID starting with b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe not found: ID does not exist" containerID="b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.624287 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe"} err="failed to get container status \"b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe\": rpc error: code = NotFound desc = could not find container \"b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe\": container with ID starting with b3e0fa9b0f63635b6387095b855495c3ddfe3cd74100b168901563e6cbea1dbe not found: ID does not exist"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.624315 4729 scope.go:117] "RemoveContainer" containerID="35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3"
Jan 27 15:26:20 crc kubenswrapper[4729]: E0127 15:26:20.624632 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3\": container with ID starting with 35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3 not found: ID does not exist" containerID="35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.624668 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3"} err="failed to get container status \"35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3\": rpc error: code = NotFound desc = could not find container \"35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3\": container with ID starting with 35a2ee289b44b6989d9d7356c3a6de748aa9bf9c7257c6d27d1882648f9aa5e3 not found: ID does not exist"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.624695 4729 scope.go:117] "RemoveContainer" containerID="7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215"
Jan 27 15:26:20 crc kubenswrapper[4729]: E0127 15:26:20.625035 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215\": container with ID starting with 7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215 not found: ID does not exist" containerID="7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215"
Jan 27 15:26:20 crc kubenswrapper[4729]: I0127 15:26:20.625074 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215"} err="failed to get container status \"7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215\": rpc error: code = NotFound desc = could not find container \"7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215\": container with ID starting with 7b11550586dd2145ca43b5d52646e7ccac90176ab8c19b40284eb3fb86d0e215 not found: ID does not exist"
Jan 27 15:26:22 crc kubenswrapper[4729]: I0127 15:26:22.064901 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" path="/var/lib/kubelet/pods/3c65a141-347a-46a0-b6dc-4059c5b5ef53/volumes"
Jan 27 15:26:22 crc kubenswrapper[4729]: I0127 15:26:22.655183 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 15:26:22 crc kubenswrapper[4729]: I0127 15:26:22.655271 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.374846 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpws"]
Jan 27 15:26:28 crc kubenswrapper[4729]: E0127 15:26:28.376171 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="extract-content"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.376186 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="extract-content"
Jan 27 15:26:28 crc kubenswrapper[4729]: E0127 15:26:28.376207 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="registry-server"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.376214 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="registry-server"
Jan 27 15:26:28 crc kubenswrapper[4729]: E0127 15:26:28.376240 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="extract-utilities"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.376248 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="extract-utilities"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.376516 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c65a141-347a-46a0-b6dc-4059c5b5ef53" containerName="registry-server"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.378513 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpws"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.393648 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpws"]
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.443181 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-utilities\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.443433 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-catalog-content\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.443609 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4vb\" (UniqueName: \"kubernetes.io/projected/6420e0d0-e48e-4682-be55-453e7ef7996d-kube-api-access-kk4vb\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.546609 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-utilities\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws"
Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.546797 4729 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-catalog-content\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.547043 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4vb\" (UniqueName: \"kubernetes.io/projected/6420e0d0-e48e-4682-be55-453e7ef7996d-kube-api-access-kk4vb\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.547565 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-utilities\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.547630 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-catalog-content\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.576698 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4vb\" (UniqueName: \"kubernetes.io/projected/6420e0d0-e48e-4682-be55-453e7ef7996d-kube-api-access-kk4vb\") pod \"redhat-marketplace-ghpws\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:28 crc kubenswrapper[4729]: I0127 15:26:28.703264 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:29 crc kubenswrapper[4729]: I0127 15:26:29.368202 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpws"] Jan 27 15:26:30 crc kubenswrapper[4729]: W0127 15:26:30.042314 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6420e0d0_e48e_4682_be55_453e7ef7996d.slice/crio-278202fa794e5f483e93e805d93bcd2a61156943fdddc96c9cd74d68c3fc4d2a WatchSource:0}: Error finding container 278202fa794e5f483e93e805d93bcd2a61156943fdddc96c9cd74d68c3fc4d2a: Status 404 returned error can't find the container with id 278202fa794e5f483e93e805d93bcd2a61156943fdddc96c9cd74d68c3fc4d2a Jan 27 15:26:30 crc kubenswrapper[4729]: I0127 15:26:30.628678 4729 generic.go:334] "Generic (PLEG): container finished" podID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerID="354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23" exitCode=0 Jan 27 15:26:30 crc kubenswrapper[4729]: I0127 15:26:30.628748 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerDied","Data":"354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23"} Jan 27 15:26:30 crc kubenswrapper[4729]: I0127 15:26:30.629312 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerStarted","Data":"278202fa794e5f483e93e805d93bcd2a61156943fdddc96c9cd74d68c3fc4d2a"} Jan 27 15:26:31 crc kubenswrapper[4729]: I0127 15:26:31.643482 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" 
event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerStarted","Data":"6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4"} Jan 27 15:26:32 crc kubenswrapper[4729]: I0127 15:26:32.657202 4729 generic.go:334] "Generic (PLEG): container finished" podID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerID="6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4" exitCode=0 Jan 27 15:26:32 crc kubenswrapper[4729]: I0127 15:26:32.657269 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerDied","Data":"6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4"} Jan 27 15:26:33 crc kubenswrapper[4729]: I0127 15:26:33.675010 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerStarted","Data":"9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3"} Jan 27 15:26:33 crc kubenswrapper[4729]: I0127 15:26:33.714541 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ghpws" podStartSLOduration=3.151833866 podStartE2EDuration="5.714494188s" podCreationTimestamp="2026-01-27 15:26:28 +0000 UTC" firstStartedPulling="2026-01-27 15:26:30.632095546 +0000 UTC m=+4877.216286550" lastFinishedPulling="2026-01-27 15:26:33.194755868 +0000 UTC m=+4879.778946872" observedRunningTime="2026-01-27 15:26:33.70576302 +0000 UTC m=+4880.289954034" watchObservedRunningTime="2026-01-27 15:26:33.714494188 +0000 UTC m=+4880.298685202" Jan 27 15:26:38 crc kubenswrapper[4729]: I0127 15:26:38.703817 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:38 crc kubenswrapper[4729]: I0127 15:26:38.704130 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:38 crc kubenswrapper[4729]: I0127 15:26:38.754900 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:38 crc kubenswrapper[4729]: I0127 15:26:38.813298 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:38 crc kubenswrapper[4729]: I0127 15:26:38.997606 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpws"] Jan 27 15:26:40 crc kubenswrapper[4729]: I0127 15:26:40.746545 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ghpws" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="registry-server" containerID="cri-o://9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3" gracePeriod=2 Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.400110 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.525146 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-catalog-content\") pod \"6420e0d0-e48e-4682-be55-453e7ef7996d\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.525371 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-utilities\") pod \"6420e0d0-e48e-4682-be55-453e7ef7996d\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.525507 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk4vb\" (UniqueName: \"kubernetes.io/projected/6420e0d0-e48e-4682-be55-453e7ef7996d-kube-api-access-kk4vb\") pod \"6420e0d0-e48e-4682-be55-453e7ef7996d\" (UID: \"6420e0d0-e48e-4682-be55-453e7ef7996d\") " Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.529940 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-utilities" (OuterVolumeSpecName: "utilities") pod "6420e0d0-e48e-4682-be55-453e7ef7996d" (UID: "6420e0d0-e48e-4682-be55-453e7ef7996d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.543165 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6420e0d0-e48e-4682-be55-453e7ef7996d-kube-api-access-kk4vb" (OuterVolumeSpecName: "kube-api-access-kk4vb") pod "6420e0d0-e48e-4682-be55-453e7ef7996d" (UID: "6420e0d0-e48e-4682-be55-453e7ef7996d"). InnerVolumeSpecName "kube-api-access-kk4vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.602813 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6420e0d0-e48e-4682-be55-453e7ef7996d" (UID: "6420e0d0-e48e-4682-be55-453e7ef7996d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.629015 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.629726 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6420e0d0-e48e-4682-be55-453e7ef7996d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.629759 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk4vb\" (UniqueName: \"kubernetes.io/projected/6420e0d0-e48e-4682-be55-453e7ef7996d-kube-api-access-kk4vb\") on node \"crc\" DevicePath \"\"" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.759580 4729 generic.go:334] "Generic (PLEG): container finished" podID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerID="9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3" exitCode=0 Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.759637 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerDied","Data":"9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3"} Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.759679 4729 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpws" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.759701 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpws" event={"ID":"6420e0d0-e48e-4682-be55-453e7ef7996d","Type":"ContainerDied","Data":"278202fa794e5f483e93e805d93bcd2a61156943fdddc96c9cd74d68c3fc4d2a"} Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.759743 4729 scope.go:117] "RemoveContainer" containerID="9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.798082 4729 scope.go:117] "RemoveContainer" containerID="6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.823613 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpws"] Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.840294 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpws"] Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.841438 4729 scope.go:117] "RemoveContainer" containerID="354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.913799 4729 scope.go:117] "RemoveContainer" containerID="9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3" Jan 27 15:26:41 crc kubenswrapper[4729]: E0127 15:26:41.914671 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3\": container with ID starting with 9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3 not found: ID does not exist" containerID="9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.914736 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3"} err="failed to get container status \"9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3\": rpc error: code = NotFound desc = could not find container \"9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3\": container with ID starting with 9b647b4e6625549def78c49161468bbbf0dff7d6dc62d54a1c7741d5f84f1de3 not found: ID does not exist" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.914778 4729 scope.go:117] "RemoveContainer" containerID="6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4" Jan 27 15:26:41 crc kubenswrapper[4729]: E0127 15:26:41.915860 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4\": container with ID starting with 6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4 not found: ID does not exist" containerID="6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.915956 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4"} err="failed to get container status \"6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4\": rpc error: code = NotFound desc = could not find container \"6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4\": container with ID starting with 6a54d43aed2f379e63d74154fb6f5a562f51214446469841dd37c499901adcc4 not found: ID does not exist" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.916000 4729 scope.go:117] "RemoveContainer" containerID="354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23" Jan 27 15:26:41 crc kubenswrapper[4729]: E0127 
15:26:41.916459 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23\": container with ID starting with 354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23 not found: ID does not exist" containerID="354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23" Jan 27 15:26:41 crc kubenswrapper[4729]: I0127 15:26:41.916495 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23"} err="failed to get container status \"354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23\": rpc error: code = NotFound desc = could not find container \"354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23\": container with ID starting with 354ea49b842084c3578e5d3dad045684c64200fa05ee737487d5b8fe52b2db23 not found: ID does not exist" Jan 27 15:26:42 crc kubenswrapper[4729]: I0127 15:26:42.067730 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" path="/var/lib/kubelet/pods/6420e0d0-e48e-4682-be55-453e7ef7996d/volumes" Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.655296 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.655857 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.655925 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.656925 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27874959f8ab55a0bfbedbd5d59ea270f6769205ac8dea168a8f3ae683e08307"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.657089 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://27874959f8ab55a0bfbedbd5d59ea270f6769205ac8dea168a8f3ae683e08307" gracePeriod=600 Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.919278 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="27874959f8ab55a0bfbedbd5d59ea270f6769205ac8dea168a8f3ae683e08307" exitCode=0 Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.919319 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"27874959f8ab55a0bfbedbd5d59ea270f6769205ac8dea168a8f3ae683e08307"} Jan 27 15:26:52 crc kubenswrapper[4729]: I0127 15:26:52.919352 4729 scope.go:117] "RemoveContainer" containerID="b6a9154b9974b14313c9ee24ab05aacac54cd52b73386843aed69cfc4e6ed4f7" Jan 27 15:26:53 crc kubenswrapper[4729]: I0127 15:26:53.932249 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42"} Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.501584 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8x9p"] Jan 27 15:27:33 crc kubenswrapper[4729]: E0127 15:27:33.503553 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="extract-utilities" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.503572 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="extract-utilities" Jan 27 15:27:33 crc kubenswrapper[4729]: E0127 15:27:33.503584 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="extract-content" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.503590 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="extract-content" Jan 27 15:27:33 crc kubenswrapper[4729]: E0127 15:27:33.503612 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="registry-server" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.503623 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="registry-server" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.503975 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6420e0d0-e48e-4682-be55-453e7ef7996d" containerName="registry-server" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.506648 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.527417 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8x9p"] Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.592474 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-catalog-content\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.593010 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-utilities\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.593185 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j596j\" (UniqueName: \"kubernetes.io/projected/796fe2bc-57d2-4c0a-b178-f2ed305293c9-kube-api-access-j596j\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.696271 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-catalog-content\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.696418 4729 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-utilities\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.696467 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j596j\" (UniqueName: \"kubernetes.io/projected/796fe2bc-57d2-4c0a-b178-f2ed305293c9-kube-api-access-j596j\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.697742 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-utilities\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.697774 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-catalog-content\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.748919 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j596j\" (UniqueName: \"kubernetes.io/projected/796fe2bc-57d2-4c0a-b178-f2ed305293c9-kube-api-access-j596j\") pod \"redhat-operators-z8x9p\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:33 crc kubenswrapper[4729]: I0127 15:27:33.841947 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:34 crc kubenswrapper[4729]: I0127 15:27:34.420631 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8x9p"] Jan 27 15:27:34 crc kubenswrapper[4729]: W0127 15:27:34.432275 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod796fe2bc_57d2_4c0a_b178_f2ed305293c9.slice/crio-3efed8264ea100ee262adce4687f1e53489d96423813d594d6a4565178507989 WatchSource:0}: Error finding container 3efed8264ea100ee262adce4687f1e53489d96423813d594d6a4565178507989: Status 404 returned error can't find the container with id 3efed8264ea100ee262adce4687f1e53489d96423813d594d6a4565178507989 Jan 27 15:27:35 crc kubenswrapper[4729]: I0127 15:27:35.421256 4729 generic.go:334] "Generic (PLEG): container finished" podID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerID="61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45" exitCode=0 Jan 27 15:27:35 crc kubenswrapper[4729]: I0127 15:27:35.421317 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerDied","Data":"61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45"} Jan 27 15:27:35 crc kubenswrapper[4729]: I0127 15:27:35.421559 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerStarted","Data":"3efed8264ea100ee262adce4687f1e53489d96423813d594d6a4565178507989"} Jan 27 15:27:37 crc kubenswrapper[4729]: I0127 15:27:37.445972 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" 
event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerStarted","Data":"62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939"} Jan 27 15:27:45 crc kubenswrapper[4729]: I0127 15:27:45.537749 4729 generic.go:334] "Generic (PLEG): container finished" podID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerID="62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939" exitCode=0 Jan 27 15:27:45 crc kubenswrapper[4729]: I0127 15:27:45.537868 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerDied","Data":"62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939"} Jan 27 15:27:46 crc kubenswrapper[4729]: I0127 15:27:46.551512 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerStarted","Data":"2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33"} Jan 27 15:27:46 crc kubenswrapper[4729]: I0127 15:27:46.579551 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8x9p" podStartSLOduration=2.9730233249999998 podStartE2EDuration="13.579523489s" podCreationTimestamp="2026-01-27 15:27:33 +0000 UTC" firstStartedPulling="2026-01-27 15:27:35.423236339 +0000 UTC m=+4942.007427343" lastFinishedPulling="2026-01-27 15:27:46.029736503 +0000 UTC m=+4952.613927507" observedRunningTime="2026-01-27 15:27:46.570644297 +0000 UTC m=+4953.154835321" watchObservedRunningTime="2026-01-27 15:27:46.579523489 +0000 UTC m=+4953.163714513" Jan 27 15:27:53 crc kubenswrapper[4729]: I0127 15:27:53.842828 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:53 crc kubenswrapper[4729]: I0127 15:27:53.843314 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:27:54 crc kubenswrapper[4729]: I0127 15:27:54.892388 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8x9p" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="registry-server" probeResult="failure" output=< Jan 27 15:27:54 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:27:54 crc kubenswrapper[4729]: > Jan 27 15:28:05 crc kubenswrapper[4729]: I0127 15:28:05.216193 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z8x9p" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="registry-server" probeResult="failure" output=< Jan 27 15:28:05 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:28:05 crc kubenswrapper[4729]: > Jan 27 15:28:13 crc kubenswrapper[4729]: I0127 15:28:13.894769 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:28:13 crc kubenswrapper[4729]: I0127 15:28:13.955615 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:28:14 crc kubenswrapper[4729]: I0127 15:28:14.146042 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8x9p"] Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.047553 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z8x9p" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="registry-server" containerID="cri-o://2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33" gracePeriod=2 Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.731626 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.790518 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-utilities\") pod \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.790766 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j596j\" (UniqueName: \"kubernetes.io/projected/796fe2bc-57d2-4c0a-b178-f2ed305293c9-kube-api-access-j596j\") pod \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.791006 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-catalog-content\") pod \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\" (UID: \"796fe2bc-57d2-4c0a-b178-f2ed305293c9\") " Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.791469 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-utilities" (OuterVolumeSpecName: "utilities") pod "796fe2bc-57d2-4c0a-b178-f2ed305293c9" (UID: "796fe2bc-57d2-4c0a-b178-f2ed305293c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.791962 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.797187 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/796fe2bc-57d2-4c0a-b178-f2ed305293c9-kube-api-access-j596j" (OuterVolumeSpecName: "kube-api-access-j596j") pod "796fe2bc-57d2-4c0a-b178-f2ed305293c9" (UID: "796fe2bc-57d2-4c0a-b178-f2ed305293c9"). InnerVolumeSpecName "kube-api-access-j596j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.894352 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j596j\" (UniqueName: \"kubernetes.io/projected/796fe2bc-57d2-4c0a-b178-f2ed305293c9-kube-api-access-j596j\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.927080 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "796fe2bc-57d2-4c0a-b178-f2ed305293c9" (UID: "796fe2bc-57d2-4c0a-b178-f2ed305293c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:28:15 crc kubenswrapper[4729]: I0127 15:28:15.996641 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/796fe2bc-57d2-4c0a-b178-f2ed305293c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.061193 4729 generic.go:334] "Generic (PLEG): container finished" podID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerID="2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33" exitCode=0 Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.061290 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8x9p" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.069076 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerDied","Data":"2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33"} Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.069362 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8x9p" event={"ID":"796fe2bc-57d2-4c0a-b178-f2ed305293c9","Type":"ContainerDied","Data":"3efed8264ea100ee262adce4687f1e53489d96423813d594d6a4565178507989"} Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.069420 4729 scope.go:117] "RemoveContainer" containerID="2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.111727 4729 scope.go:117] "RemoveContainer" containerID="62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.115130 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z8x9p"] Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 
15:28:16.126502 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z8x9p"] Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.150673 4729 scope.go:117] "RemoveContainer" containerID="61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.198788 4729 scope.go:117] "RemoveContainer" containerID="2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33" Jan 27 15:28:16 crc kubenswrapper[4729]: E0127 15:28:16.199302 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33\": container with ID starting with 2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33 not found: ID does not exist" containerID="2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.199410 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33"} err="failed to get container status \"2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33\": rpc error: code = NotFound desc = could not find container \"2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33\": container with ID starting with 2623c37f719fde3f4fc605603bda9087b4d9b170aa5a7c3895da6d42f24ade33 not found: ID does not exist" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.199502 4729 scope.go:117] "RemoveContainer" containerID="62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939" Jan 27 15:28:16 crc kubenswrapper[4729]: E0127 15:28:16.199841 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939\": container with ID 
starting with 62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939 not found: ID does not exist" containerID="62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.199993 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939"} err="failed to get container status \"62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939\": rpc error: code = NotFound desc = could not find container \"62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939\": container with ID starting with 62098046daf0a846439a8df8225668f18e6c25f8ae1d7aee695ec3801038b939 not found: ID does not exist" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.200034 4729 scope.go:117] "RemoveContainer" containerID="61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45" Jan 27 15:28:16 crc kubenswrapper[4729]: E0127 15:28:16.200398 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45\": container with ID starting with 61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45 not found: ID does not exist" containerID="61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45" Jan 27 15:28:16 crc kubenswrapper[4729]: I0127 15:28:16.200482 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45"} err="failed to get container status \"61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45\": rpc error: code = NotFound desc = could not find container \"61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45\": container with ID starting with 61f77bd9bc41ee4425fd05faef39052325e5c6686617637f9f5dba8f8b62ae45 not found: 
ID does not exist" Jan 27 15:28:18 crc kubenswrapper[4729]: I0127 15:28:18.066437 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" path="/var/lib/kubelet/pods/796fe2bc-57d2-4c0a-b178-f2ed305293c9/volumes" Jan 27 15:29:22 crc kubenswrapper[4729]: I0127 15:29:22.654783 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:29:22 crc kubenswrapper[4729]: I0127 15:29:22.656144 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:29:52 crc kubenswrapper[4729]: I0127 15:29:52.655832 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:29:52 crc kubenswrapper[4729]: I0127 15:29:52.657799 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.171737 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg"] Jan 27 15:30:00 crc kubenswrapper[4729]: 
E0127 15:30:00.173374 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="extract-utilities" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.173391 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="extract-utilities" Jan 27 15:30:00 crc kubenswrapper[4729]: E0127 15:30:00.173405 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="extract-content" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.173411 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="extract-content" Jan 27 15:30:00 crc kubenswrapper[4729]: E0127 15:30:00.173432 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="registry-server" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.173438 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="registry-server" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.175043 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="796fe2bc-57d2-4c0a-b178-f2ed305293c9" containerName="registry-server" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.176064 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.178506 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.183374 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.186640 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg"] Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.274791 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01eba2a-349a-4cda-99c3-2b658358a3ab-config-volume\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.274893 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a01eba2a-349a-4cda-99c3-2b658358a3ab-secret-volume\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.275070 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj4c2\" (UniqueName: \"kubernetes.io/projected/a01eba2a-349a-4cda-99c3-2b658358a3ab-kube-api-access-cj4c2\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.377330 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01eba2a-349a-4cda-99c3-2b658358a3ab-config-volume\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.377381 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a01eba2a-349a-4cda-99c3-2b658358a3ab-secret-volume\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.377449 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj4c2\" (UniqueName: \"kubernetes.io/projected/a01eba2a-349a-4cda-99c3-2b658358a3ab-kube-api-access-cj4c2\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.378346 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01eba2a-349a-4cda-99c3-2b658358a3ab-config-volume\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.393565 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a01eba2a-349a-4cda-99c3-2b658358a3ab-secret-volume\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.403612 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj4c2\" (UniqueName: \"kubernetes.io/projected/a01eba2a-349a-4cda-99c3-2b658358a3ab-kube-api-access-cj4c2\") pod \"collect-profiles-29492130-svkrg\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.516745 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:00 crc kubenswrapper[4729]: I0127 15:30:00.996556 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg"] Jan 27 15:30:01 crc kubenswrapper[4729]: I0127 15:30:01.221460 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" event={"ID":"a01eba2a-349a-4cda-99c3-2b658358a3ab","Type":"ContainerStarted","Data":"9d1801889932da513fbba9b22a7af9ea9007f3523ba8de0b2093fad7b6d5e918"} Jan 27 15:30:02 crc kubenswrapper[4729]: I0127 15:30:02.233828 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" event={"ID":"a01eba2a-349a-4cda-99c3-2b658358a3ab","Type":"ContainerStarted","Data":"b9e521344a5aa551ee7c166a7e7065b90565bb873e08760b41e4cefb70eda65e"} Jan 27 15:30:02 crc kubenswrapper[4729]: I0127 15:30:02.257247 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" 
podStartSLOduration=2.25722926 podStartE2EDuration="2.25722926s" podCreationTimestamp="2026-01-27 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:30:02.247427613 +0000 UTC m=+5088.831618627" watchObservedRunningTime="2026-01-27 15:30:02.25722926 +0000 UTC m=+5088.841420264" Jan 27 15:30:03 crc kubenswrapper[4729]: I0127 15:30:03.245447 4729 generic.go:334] "Generic (PLEG): container finished" podID="a01eba2a-349a-4cda-99c3-2b658358a3ab" containerID="b9e521344a5aa551ee7c166a7e7065b90565bb873e08760b41e4cefb70eda65e" exitCode=0 Jan 27 15:30:03 crc kubenswrapper[4729]: I0127 15:30:03.245511 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" event={"ID":"a01eba2a-349a-4cda-99c3-2b658358a3ab","Type":"ContainerDied","Data":"b9e521344a5aa551ee7c166a7e7065b90565bb873e08760b41e4cefb70eda65e"} Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.842523 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.943625 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj4c2\" (UniqueName: \"kubernetes.io/projected/a01eba2a-349a-4cda-99c3-2b658358a3ab-kube-api-access-cj4c2\") pod \"a01eba2a-349a-4cda-99c3-2b658358a3ab\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.943854 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a01eba2a-349a-4cda-99c3-2b658358a3ab-secret-volume\") pod \"a01eba2a-349a-4cda-99c3-2b658358a3ab\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.943963 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01eba2a-349a-4cda-99c3-2b658358a3ab-config-volume\") pod \"a01eba2a-349a-4cda-99c3-2b658358a3ab\" (UID: \"a01eba2a-349a-4cda-99c3-2b658358a3ab\") " Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.944561 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a01eba2a-349a-4cda-99c3-2b658358a3ab-config-volume" (OuterVolumeSpecName: "config-volume") pod "a01eba2a-349a-4cda-99c3-2b658358a3ab" (UID: "a01eba2a-349a-4cda-99c3-2b658358a3ab"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.944998 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a01eba2a-349a-4cda-99c3-2b658358a3ab-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.949816 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a01eba2a-349a-4cda-99c3-2b658358a3ab-kube-api-access-cj4c2" (OuterVolumeSpecName: "kube-api-access-cj4c2") pod "a01eba2a-349a-4cda-99c3-2b658358a3ab" (UID: "a01eba2a-349a-4cda-99c3-2b658358a3ab"). InnerVolumeSpecName "kube-api-access-cj4c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:30:04 crc kubenswrapper[4729]: I0127 15:30:04.950275 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a01eba2a-349a-4cda-99c3-2b658358a3ab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a01eba2a-349a-4cda-99c3-2b658358a3ab" (UID: "a01eba2a-349a-4cda-99c3-2b658358a3ab"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.050838 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj4c2\" (UniqueName: \"kubernetes.io/projected/a01eba2a-349a-4cda-99c3-2b658358a3ab-kube-api-access-cj4c2\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.051299 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a01eba2a-349a-4cda-99c3-2b658358a3ab-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.273130 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" event={"ID":"a01eba2a-349a-4cda-99c3-2b658358a3ab","Type":"ContainerDied","Data":"9d1801889932da513fbba9b22a7af9ea9007f3523ba8de0b2093fad7b6d5e918"} Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.273195 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d1801889932da513fbba9b22a7af9ea9007f3523ba8de0b2093fad7b6d5e918" Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.273276 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg" Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.345246 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j"] Jan 27 15:30:05 crc kubenswrapper[4729]: I0127 15:30:05.365619 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492085-76c9j"] Jan 27 15:30:06 crc kubenswrapper[4729]: I0127 15:30:06.071136 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b861f842-f980-4152-abe9-41e22094537b" path="/var/lib/kubelet/pods/b861f842-f980-4152-abe9-41e22094537b/volumes" Jan 27 15:30:14 crc kubenswrapper[4729]: I0127 15:30:14.376692 4729 scope.go:117] "RemoveContainer" containerID="b09654177877401525865a2b8c329af3c746dec04de898e342e072b5ec40ce20" Jan 27 15:30:22 crc kubenswrapper[4729]: I0127 15:30:22.657229 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:30:22 crc kubenswrapper[4729]: I0127 15:30:22.657818 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:30:22 crc kubenswrapper[4729]: I0127 15:30:22.657903 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:30:22 crc kubenswrapper[4729]: I0127 15:30:22.658899 4729 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:30:22 crc kubenswrapper[4729]: I0127 15:30:22.658958 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" gracePeriod=600 Jan 27 15:30:22 crc kubenswrapper[4729]: E0127 15:30:22.828645 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:30:23 crc kubenswrapper[4729]: I0127 15:30:23.476257 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" exitCode=0 Jan 27 15:30:23 crc kubenswrapper[4729]: I0127 15:30:23.476308 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42"} Jan 27 15:30:23 crc kubenswrapper[4729]: I0127 15:30:23.476348 4729 scope.go:117] "RemoveContainer" containerID="27874959f8ab55a0bfbedbd5d59ea270f6769205ac8dea168a8f3ae683e08307" Jan 27 15:30:23 crc 
kubenswrapper[4729]: I0127 15:30:23.477495 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:30:23 crc kubenswrapper[4729]: E0127 15:30:23.477937 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:30:38 crc kubenswrapper[4729]: I0127 15:30:38.051299 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:30:38 crc kubenswrapper[4729]: E0127 15:30:38.052126 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:30:49 crc kubenswrapper[4729]: I0127 15:30:49.052482 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:30:49 crc kubenswrapper[4729]: E0127 15:30:49.053667 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 
27 15:31:01 crc kubenswrapper[4729]: I0127 15:31:01.051671 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:31:01 crc kubenswrapper[4729]: E0127 15:31:01.052560 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:31:13 crc kubenswrapper[4729]: I0127 15:31:13.051493 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:31:13 crc kubenswrapper[4729]: E0127 15:31:13.053615 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:31:28 crc kubenswrapper[4729]: I0127 15:31:28.055445 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:31:28 crc kubenswrapper[4729]: E0127 15:31:28.056432 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.528456 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-76s5b"] Jan 27 15:31:33 crc kubenswrapper[4729]: E0127 15:31:33.529753 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a01eba2a-349a-4cda-99c3-2b658358a3ab" containerName="collect-profiles" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.529773 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a01eba2a-349a-4cda-99c3-2b658358a3ab" containerName="collect-profiles" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.530180 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a01eba2a-349a-4cda-99c3-2b658358a3ab" containerName="collect-profiles" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.540334 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.548100 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-76s5b"] Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.616336 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2qmn\" (UniqueName: \"kubernetes.io/projected/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-kube-api-access-d2qmn\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.616543 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-catalog-content\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " 
pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.616567 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-utilities\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.719093 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2qmn\" (UniqueName: \"kubernetes.io/projected/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-kube-api-access-d2qmn\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.719224 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-catalog-content\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.719259 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-utilities\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.719990 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-utilities\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " 
pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.720150 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-catalog-content\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.741111 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2qmn\" (UniqueName: \"kubernetes.io/projected/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-kube-api-access-d2qmn\") pod \"certified-operators-76s5b\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:33 crc kubenswrapper[4729]: I0127 15:31:33.864324 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:34 crc kubenswrapper[4729]: I0127 15:31:34.448208 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-76s5b"] Jan 27 15:31:34 crc kubenswrapper[4729]: W0127 15:31:34.452835 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cf1d4b9_9b53_4532_bea5_418b7d19c61f.slice/crio-d6194f83f46f6c2441081dbb0c89a75d7671d3dbdd539381d8000f766576b298 WatchSource:0}: Error finding container d6194f83f46f6c2441081dbb0c89a75d7671d3dbdd539381d8000f766576b298: Status 404 returned error can't find the container with id d6194f83f46f6c2441081dbb0c89a75d7671d3dbdd539381d8000f766576b298 Jan 27 15:31:35 crc kubenswrapper[4729]: I0127 15:31:35.269395 4729 generic.go:334] "Generic (PLEG): container finished" podID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerID="5c166ea60859cbcdf6f8c28f24af0b18c7cc473f51cf4388a1221b08d812d04e" 
exitCode=0 Jan 27 15:31:35 crc kubenswrapper[4729]: I0127 15:31:35.269655 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerDied","Data":"5c166ea60859cbcdf6f8c28f24af0b18c7cc473f51cf4388a1221b08d812d04e"} Jan 27 15:31:35 crc kubenswrapper[4729]: I0127 15:31:35.269681 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerStarted","Data":"d6194f83f46f6c2441081dbb0c89a75d7671d3dbdd539381d8000f766576b298"} Jan 27 15:31:35 crc kubenswrapper[4729]: I0127 15:31:35.271672 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:31:36 crc kubenswrapper[4729]: I0127 15:31:36.281948 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerStarted","Data":"0279159555ecf50f822c5c47dfc2770aa3c98a5f5c8aeb12b316bc6287f1cc8a"} Jan 27 15:31:39 crc kubenswrapper[4729]: I0127 15:31:39.320729 4729 generic.go:334] "Generic (PLEG): container finished" podID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerID="0279159555ecf50f822c5c47dfc2770aa3c98a5f5c8aeb12b316bc6287f1cc8a" exitCode=0 Jan 27 15:31:39 crc kubenswrapper[4729]: I0127 15:31:39.320803 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerDied","Data":"0279159555ecf50f822c5c47dfc2770aa3c98a5f5c8aeb12b316bc6287f1cc8a"} Jan 27 15:31:41 crc kubenswrapper[4729]: I0127 15:31:41.391799 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" 
event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerStarted","Data":"0c3ca707d558f37d40dc698c1c017a9701b675f799e4523a5a10d0a51000d55b"} Jan 27 15:31:41 crc kubenswrapper[4729]: I0127 15:31:41.424387 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-76s5b" podStartSLOduration=2.855020145 podStartE2EDuration="8.424361539s" podCreationTimestamp="2026-01-27 15:31:33 +0000 UTC" firstStartedPulling="2026-01-27 15:31:35.271477621 +0000 UTC m=+5181.855668625" lastFinishedPulling="2026-01-27 15:31:40.840819015 +0000 UTC m=+5187.425010019" observedRunningTime="2026-01-27 15:31:41.416606465 +0000 UTC m=+5188.000797469" watchObservedRunningTime="2026-01-27 15:31:41.424361539 +0000 UTC m=+5188.008552543" Jan 27 15:31:43 crc kubenswrapper[4729]: I0127 15:31:43.051899 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:31:43 crc kubenswrapper[4729]: E0127 15:31:43.052520 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:31:43 crc kubenswrapper[4729]: I0127 15:31:43.864581 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:43 crc kubenswrapper[4729]: I0127 15:31:43.865001 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:44 crc kubenswrapper[4729]: I0127 15:31:44.919408 4729 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-76s5b" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="registry-server" probeResult="failure" output=< Jan 27 15:31:44 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:31:44 crc kubenswrapper[4729]: > Jan 27 15:31:53 crc kubenswrapper[4729]: I0127 15:31:53.932272 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:53 crc kubenswrapper[4729]: I0127 15:31:53.985859 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:54 crc kubenswrapper[4729]: I0127 15:31:54.180826 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-76s5b"] Jan 27 15:31:55 crc kubenswrapper[4729]: I0127 15:31:55.051011 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:31:55 crc kubenswrapper[4729]: E0127 15:31:55.051445 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:31:55 crc kubenswrapper[4729]: I0127 15:31:55.548093 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-76s5b" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="registry-server" containerID="cri-o://0c3ca707d558f37d40dc698c1c017a9701b675f799e4523a5a10d0a51000d55b" gracePeriod=2 Jan 27 15:31:56 crc kubenswrapper[4729]: I0127 15:31:56.559124 4729 generic.go:334] "Generic (PLEG): 
container finished" podID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerID="0c3ca707d558f37d40dc698c1c017a9701b675f799e4523a5a10d0a51000d55b" exitCode=0 Jan 27 15:31:56 crc kubenswrapper[4729]: I0127 15:31:56.559211 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerDied","Data":"0c3ca707d558f37d40dc698c1c017a9701b675f799e4523a5a10d0a51000d55b"} Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.047590 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.153644 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-catalog-content\") pod \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.153761 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-utilities\") pod \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.153999 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2qmn\" (UniqueName: \"kubernetes.io/projected/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-kube-api-access-d2qmn\") pod \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\" (UID: \"6cf1d4b9-9b53-4532-bea5-418b7d19c61f\") " Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.154712 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-utilities" (OuterVolumeSpecName: "utilities") pod 
"6cf1d4b9-9b53-4532-bea5-418b7d19c61f" (UID: "6cf1d4b9-9b53-4532-bea5-418b7d19c61f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.155145 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.164243 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-kube-api-access-d2qmn" (OuterVolumeSpecName: "kube-api-access-d2qmn") pod "6cf1d4b9-9b53-4532-bea5-418b7d19c61f" (UID: "6cf1d4b9-9b53-4532-bea5-418b7d19c61f"). InnerVolumeSpecName "kube-api-access-d2qmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.212015 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cf1d4b9-9b53-4532-bea5-418b7d19c61f" (UID: "6cf1d4b9-9b53-4532-bea5-418b7d19c61f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.263172 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.263209 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2qmn\" (UniqueName: \"kubernetes.io/projected/6cf1d4b9-9b53-4532-bea5-418b7d19c61f-kube-api-access-d2qmn\") on node \"crc\" DevicePath \"\"" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.572414 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-76s5b" event={"ID":"6cf1d4b9-9b53-4532-bea5-418b7d19c61f","Type":"ContainerDied","Data":"d6194f83f46f6c2441081dbb0c89a75d7671d3dbdd539381d8000f766576b298"} Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.572748 4729 scope.go:117] "RemoveContainer" containerID="0c3ca707d558f37d40dc698c1c017a9701b675f799e4523a5a10d0a51000d55b" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.572471 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-76s5b" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.598301 4729 scope.go:117] "RemoveContainer" containerID="0279159555ecf50f822c5c47dfc2770aa3c98a5f5c8aeb12b316bc6287f1cc8a" Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.619143 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-76s5b"] Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.654461 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-76s5b"] Jan 27 15:31:57 crc kubenswrapper[4729]: I0127 15:31:57.664150 4729 scope.go:117] "RemoveContainer" containerID="5c166ea60859cbcdf6f8c28f24af0b18c7cc473f51cf4388a1221b08d812d04e" Jan 27 15:31:58 crc kubenswrapper[4729]: I0127 15:31:58.064765 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" path="/var/lib/kubelet/pods/6cf1d4b9-9b53-4532-bea5-418b7d19c61f/volumes" Jan 27 15:32:10 crc kubenswrapper[4729]: I0127 15:32:10.051449 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:32:10 crc kubenswrapper[4729]: E0127 15:32:10.052533 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:32:22 crc kubenswrapper[4729]: I0127 15:32:22.051334 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:32:22 crc kubenswrapper[4729]: E0127 15:32:22.052178 4729 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:32:35 crc kubenswrapper[4729]: I0127 15:32:35.051225 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:32:35 crc kubenswrapper[4729]: E0127 15:32:35.052068 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:32:49 crc kubenswrapper[4729]: I0127 15:32:49.052160 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:32:49 crc kubenswrapper[4729]: E0127 15:32:49.053189 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.566436 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 15:32:56 crc kubenswrapper[4729]: E0127 15:32:56.567264 4729 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="extract-content" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.567278 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="extract-content" Jan 27 15:32:56 crc kubenswrapper[4729]: E0127 15:32:56.567300 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="extract-utilities" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.567306 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="extract-utilities" Jan 27 15:32:56 crc kubenswrapper[4729]: E0127 15:32:56.567319 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="registry-server" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.567325 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="registry-server" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.567561 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf1d4b9-9b53-4532-bea5-418b7d19c61f" containerName="registry-server" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.568457 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.571348 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.571634 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.572244 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.573226 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nbsgt" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.590963 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693283 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693337 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693420 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693453 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693468 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9kk\" (UniqueName: \"kubernetes.io/projected/440fdd61-ad16-4ee7-bf64-2754db1c5db8-kube-api-access-tn9kk\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693525 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693709 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-config-data\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.693864 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.694004 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796116 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796151 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796238 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796270 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796283 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9kk\" (UniqueName: \"kubernetes.io/projected/440fdd61-ad16-4ee7-bf64-2754db1c5db8-kube-api-access-tn9kk\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796311 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796381 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-config-data\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.796414 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.797699 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.798658 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.799110 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.799525 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-config-data\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.799737 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") device mount path 
\"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.803131 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.805172 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.805783 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.823237 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9kk\" (UniqueName: \"kubernetes.io/projected/440fdd61-ad16-4ee7-bf64-2754db1c5db8-kube-api-access-tn9kk\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.845934 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " pod="openstack/tempest-tests-tempest" Jan 27 15:32:56 crc kubenswrapper[4729]: I0127 15:32:56.895150 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:32:57 crc kubenswrapper[4729]: I0127 15:32:57.417148 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 27 15:32:58 crc kubenswrapper[4729]: I0127 15:32:58.333404 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"440fdd61-ad16-4ee7-bf64-2754db1c5db8","Type":"ContainerStarted","Data":"2c8e3bb7cc7912b4c4ac9b519a7a7d38a35fc081b0e9a01660dc302053d6d25a"} Jan 27 15:33:00 crc kubenswrapper[4729]: I0127 15:33:00.051619 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:33:00 crc kubenswrapper[4729]: E0127 15:33:00.052189 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:33:15 crc kubenswrapper[4729]: I0127 15:33:15.050864 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:33:15 crc kubenswrapper[4729]: E0127 15:33:15.051666 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:33:27 crc kubenswrapper[4729]: I0127 15:33:27.051688 4729 scope.go:117] "RemoveContainer" 
containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:33:27 crc kubenswrapper[4729]: E0127 15:33:27.052707 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:33:42 crc kubenswrapper[4729]: I0127 15:33:42.051612 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:33:42 crc kubenswrapper[4729]: E0127 15:33:42.052572 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:33:42 crc kubenswrapper[4729]: E0127 15:33:42.164437 4729 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 27 15:33:42 crc kubenswrapper[4729]: E0127 15:33:42.175411 4729 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn9kk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(440fdd61-ad16-4ee7-bf64-2754db1c5db8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 15:33:42 crc kubenswrapper[4729]: E0127 15:33:42.176619 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="440fdd61-ad16-4ee7-bf64-2754db1c5db8" Jan 27 15:33:42 crc kubenswrapper[4729]: E0127 15:33:42.869160 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="440fdd61-ad16-4ee7-bf64-2754db1c5db8" Jan 27 15:33:53 crc 
kubenswrapper[4729]: I0127 15:33:53.051516 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:33:53 crc kubenswrapper[4729]: E0127 15:33:53.052488 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:33:59 crc kubenswrapper[4729]: I0127 15:33:59.726494 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 27 15:34:04 crc kubenswrapper[4729]: I0127 15:34:04.117047 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"440fdd61-ad16-4ee7-bf64-2754db1c5db8","Type":"ContainerStarted","Data":"a51181e33ab61a5c6f36531d88077f30d4690d52faf4a207447026dec8d841a6"} Jan 27 15:34:04 crc kubenswrapper[4729]: I0127 15:34:04.154304 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.856730112 podStartE2EDuration="1m9.154278873s" podCreationTimestamp="2026-01-27 15:32:55 +0000 UTC" firstStartedPulling="2026-01-27 15:32:57.423529514 +0000 UTC m=+5264.007720518" lastFinishedPulling="2026-01-27 15:33:59.721078275 +0000 UTC m=+5326.305269279" observedRunningTime="2026-01-27 15:34:04.146762746 +0000 UTC m=+5330.730953750" watchObservedRunningTime="2026-01-27 15:34:04.154278873 +0000 UTC m=+5330.738469887" Jan 27 15:34:06 crc kubenswrapper[4729]: I0127 15:34:06.051091 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:34:06 crc kubenswrapper[4729]: E0127 
15:34:06.052418 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:34:18 crc kubenswrapper[4729]: I0127 15:34:18.052192 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:34:18 crc kubenswrapper[4729]: E0127 15:34:18.053078 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:34:30 crc kubenswrapper[4729]: I0127 15:34:30.054978 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:34:30 crc kubenswrapper[4729]: E0127 15:34:30.055907 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:34:42 crc kubenswrapper[4729]: I0127 15:34:42.051504 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:34:42 crc 
kubenswrapper[4729]: E0127 15:34:42.052398 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:34:56 crc kubenswrapper[4729]: I0127 15:34:56.051326 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:34:56 crc kubenswrapper[4729]: E0127 15:34:56.052427 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:35:10 crc kubenswrapper[4729]: I0127 15:35:10.052887 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:35:10 crc kubenswrapper[4729]: E0127 15:35:10.055324 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:35:21 crc kubenswrapper[4729]: I0127 15:35:21.051915 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 
27 15:35:21 crc kubenswrapper[4729]: E0127 15:35:21.052736 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:35:33 crc kubenswrapper[4729]: I0127 15:35:33.051212 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:35:34 crc kubenswrapper[4729]: I0127 15:35:34.200012 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"18f5ce9911bb43727b936d4bb39e570589dcf1d163dc37c05478aab0b5adb754"} Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.588548 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ncl9p"] Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.602293 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.628059 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncl9p"] Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.653054 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dm9s\" (UniqueName: \"kubernetes.io/projected/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-kube-api-access-4dm9s\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.653242 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-utilities\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.653264 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-catalog-content\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.756764 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-utilities\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.756821 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-catalog-content\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.757130 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dm9s\" (UniqueName: \"kubernetes.io/projected/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-kube-api-access-4dm9s\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.764385 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-catalog-content\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.764383 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-utilities\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.789952 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dm9s\" (UniqueName: \"kubernetes.io/projected/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-kube-api-access-4dm9s\") pod \"community-operators-ncl9p\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:15 crc kubenswrapper[4729]: I0127 15:36:15.934354 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:18 crc kubenswrapper[4729]: I0127 15:36:18.194342 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncl9p"] Jan 27 15:36:18 crc kubenswrapper[4729]: I0127 15:36:18.552782 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerStarted","Data":"fcbfd504ac9ce8b2a89cb923271f653a27607d4d44dd034af91c6631ae07bcf1"} Jan 27 15:36:19 crc kubenswrapper[4729]: I0127 15:36:19.567926 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerDied","Data":"f78479444095c58f7688b5e2e00ac1c0f547e1cea501d5ce54e926e88d7f23ee"} Jan 27 15:36:19 crc kubenswrapper[4729]: I0127 15:36:19.571180 4729 generic.go:334] "Generic (PLEG): container finished" podID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerID="f78479444095c58f7688b5e2e00ac1c0f547e1cea501d5ce54e926e88d7f23ee" exitCode=0 Jan 27 15:36:21 crc kubenswrapper[4729]: I0127 15:36:21.611820 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerStarted","Data":"8310f6d99a5acef23c7440dc4b3a5d27abbc616ca4438df9ef932d8ae66ad250"} Jan 27 15:36:26 crc kubenswrapper[4729]: I0127 15:36:26.673555 4729 generic.go:334] "Generic (PLEG): container finished" podID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerID="8310f6d99a5acef23c7440dc4b3a5d27abbc616ca4438df9ef932d8ae66ad250" exitCode=0 Jan 27 15:36:26 crc kubenswrapper[4729]: I0127 15:36:26.674227 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" 
event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerDied","Data":"8310f6d99a5acef23c7440dc4b3a5d27abbc616ca4438df9ef932d8ae66ad250"} Jan 27 15:36:27 crc kubenswrapper[4729]: I0127 15:36:27.688768 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerStarted","Data":"5ae40715314bdb9830ada23a6bdd368f01cf5e4196291ad7bedb7bcfff92a799"} Jan 27 15:36:35 crc kubenswrapper[4729]: I0127 15:36:35.935790 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:35 crc kubenswrapper[4729]: I0127 15:36:35.936378 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:37 crc kubenswrapper[4729]: I0127 15:36:37.030893 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ncl9p" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="registry-server" probeResult="failure" output=< Jan 27 15:36:37 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:36:37 crc kubenswrapper[4729]: > Jan 27 15:36:47 crc kubenswrapper[4729]: I0127 15:36:47.014175 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ncl9p" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="registry-server" probeResult="failure" output=< Jan 27 15:36:47 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:36:47 crc kubenswrapper[4729]: > Jan 27 15:36:52 crc kubenswrapper[4729]: I0127 15:36:52.268069 4729 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.214347039s: [/var/lib/containers/storage/overlay/fa3b3cfa1c46355e299eac6698bd918cd2617012510480397c7554ccdd5f21f6/diff 
/var/log/pods/openstack_openstackclient_0b4b3ce4-58fb-430f-8465-ca0a501a6aba/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Jan 27 15:36:56 crc kubenswrapper[4729]: I0127 15:36:56.125508 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:56 crc kubenswrapper[4729]: I0127 15:36:56.179421 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:36:56 crc kubenswrapper[4729]: I0127 15:36:56.524810 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ncl9p" podStartSLOduration=33.999894307 podStartE2EDuration="41.499835648s" podCreationTimestamp="2026-01-27 15:36:15 +0000 UTC" firstStartedPulling="2026-01-27 15:36:19.576665368 +0000 UTC m=+5466.160856362" lastFinishedPulling="2026-01-27 15:36:27.076606699 +0000 UTC m=+5473.660797703" observedRunningTime="2026-01-27 15:36:27.719488343 +0000 UTC m=+5474.303679367" watchObservedRunningTime="2026-01-27 15:36:56.499835648 +0000 UTC m=+5503.084026652" Jan 27 15:36:56 crc kubenswrapper[4729]: I0127 15:36:56.634624 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncl9p"] Jan 27 15:36:58 crc kubenswrapper[4729]: I0127 15:36:58.101976 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ncl9p" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="registry-server" containerID="cri-o://5ae40715314bdb9830ada23a6bdd368f01cf5e4196291ad7bedb7bcfff92a799" gracePeriod=2 Jan 27 15:36:59 crc kubenswrapper[4729]: I0127 15:36:59.076069 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" 
event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerDied","Data":"5ae40715314bdb9830ada23a6bdd368f01cf5e4196291ad7bedb7bcfff92a799"} Jan 27 15:36:59 crc kubenswrapper[4729]: I0127 15:36:59.075982 4729 generic.go:334] "Generic (PLEG): container finished" podID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerID="5ae40715314bdb9830ada23a6bdd368f01cf5e4196291ad7bedb7bcfff92a799" exitCode=0 Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.112242 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncl9p" event={"ID":"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e","Type":"ContainerDied","Data":"fcbfd504ac9ce8b2a89cb923271f653a27607d4d44dd034af91c6631ae07bcf1"} Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.116108 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbfd504ac9ce8b2a89cb923271f653a27607d4d44dd034af91c6631ae07bcf1" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.148303 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.322328 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dm9s\" (UniqueName: \"kubernetes.io/projected/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-kube-api-access-4dm9s\") pod \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.322883 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-utilities\") pod \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.323095 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-catalog-content\") pod \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\" (UID: \"a184dcbf-fdb4-4e5f-b73c-af2f16309f7e\") " Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.357726 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-utilities" (OuterVolumeSpecName: "utilities") pod "a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" (UID: "a184dcbf-fdb4-4e5f-b73c-af2f16309f7e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.436371 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.454835 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-kube-api-access-4dm9s" (OuterVolumeSpecName: "kube-api-access-4dm9s") pod "a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" (UID: "a184dcbf-fdb4-4e5f-b73c-af2f16309f7e"). InnerVolumeSpecName "kube-api-access-4dm9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.538746 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dm9s\" (UniqueName: \"kubernetes.io/projected/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-kube-api-access-4dm9s\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.732024 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" (UID: "a184dcbf-fdb4-4e5f-b73c-af2f16309f7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:37:01 crc kubenswrapper[4729]: I0127 15:37:01.742953 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:37:02 crc kubenswrapper[4729]: I0127 15:37:02.121656 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncl9p" Jan 27 15:37:02 crc kubenswrapper[4729]: I0127 15:37:02.196515 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncl9p"] Jan 27 15:37:02 crc kubenswrapper[4729]: I0127 15:37:02.243155 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ncl9p"] Jan 27 15:37:04 crc kubenswrapper[4729]: I0127 15:37:04.064917 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" path="/var/lib/kubelet/pods/a184dcbf-fdb4-4e5f-b73c-af2f16309f7e/volumes" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.607860 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bpvzg"] Jan 27 15:37:19 crc kubenswrapper[4729]: E0127 15:37:19.640931 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="registry-server" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.641032 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="registry-server" Jan 27 15:37:19 crc kubenswrapper[4729]: E0127 15:37:19.646725 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="extract-content" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.646774 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="extract-content" Jan 27 15:37:19 crc kubenswrapper[4729]: E0127 15:37:19.646854 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="extract-utilities" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.646865 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="extract-utilities" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.652365 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a184dcbf-fdb4-4e5f-b73c-af2f16309f7e" containerName="registry-server" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.675294 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.826744 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck9s5\" (UniqueName: \"kubernetes.io/projected/f4c77123-2bbb-417b-81f3-cfec7d4baa50-kube-api-access-ck9s5\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.827201 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-utilities\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.827248 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-catalog-content\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.930157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck9s5\" (UniqueName: \"kubernetes.io/projected/f4c77123-2bbb-417b-81f3-cfec7d4baa50-kube-api-access-ck9s5\") pod 
\"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:19 crc kubenswrapper[4729]: I0127 15:37:19.930225 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-utilities\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:20 crc kubenswrapper[4729]: I0127 15:37:19.930542 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-catalog-content\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:20 crc kubenswrapper[4729]: I0127 15:37:20.009349 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpvzg"] Jan 27 15:37:20 crc kubenswrapper[4729]: I0127 15:37:20.313175 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-catalog-content\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:20 crc kubenswrapper[4729]: I0127 15:37:20.313825 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-utilities\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:20 crc kubenswrapper[4729]: I0127 15:37:20.437223 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ck9s5\" (UniqueName: \"kubernetes.io/projected/f4c77123-2bbb-417b-81f3-cfec7d4baa50-kube-api-access-ck9s5\") pod \"redhat-marketplace-bpvzg\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:20 crc kubenswrapper[4729]: I0127 15:37:20.612971 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:24 crc kubenswrapper[4729]: I0127 15:37:24.755919 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpvzg"] Jan 27 15:37:25 crc kubenswrapper[4729]: I0127 15:37:25.424397 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerDied","Data":"051469eb98ad94f9b71809d840a3ce1ec7f7c5d5fc30d7e89324d2df1aa90548"} Jan 27 15:37:25 crc kubenswrapper[4729]: I0127 15:37:25.424899 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerID="051469eb98ad94f9b71809d840a3ce1ec7f7c5d5fc30d7e89324d2df1aa90548" exitCode=0 Jan 27 15:37:25 crc kubenswrapper[4729]: I0127 15:37:25.425751 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerStarted","Data":"569a28ea24e311005c3ecc5af95798adc7cb0bc883c2d915cff476c9879c551a"} Jan 27 15:37:25 crc kubenswrapper[4729]: I0127 15:37:25.493912 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:37:27 crc kubenswrapper[4729]: I0127 15:37:27.448940 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" 
event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerStarted","Data":"42dea088bd1475097425862b6884e53deead9e80e25b9cda8a73bdda02f3899b"} Jan 27 15:37:30 crc kubenswrapper[4729]: I0127 15:37:30.496008 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerID="42dea088bd1475097425862b6884e53deead9e80e25b9cda8a73bdda02f3899b" exitCode=0 Jan 27 15:37:30 crc kubenswrapper[4729]: I0127 15:37:30.496091 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerDied","Data":"42dea088bd1475097425862b6884e53deead9e80e25b9cda8a73bdda02f3899b"} Jan 27 15:37:32 crc kubenswrapper[4729]: I0127 15:37:32.519256 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerStarted","Data":"c1139ec87182bed02c5568f1e3cd06912a6a4d509e89a6d7e992e263ff4c8295"} Jan 27 15:37:32 crc kubenswrapper[4729]: I0127 15:37:32.571172 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bpvzg" podStartSLOduration=7.731708336 podStartE2EDuration="13.568815696s" podCreationTimestamp="2026-01-27 15:37:19 +0000 UTC" firstStartedPulling="2026-01-27 15:37:25.42823571 +0000 UTC m=+5532.012426714" lastFinishedPulling="2026-01-27 15:37:31.26534308 +0000 UTC m=+5537.849534074" observedRunningTime="2026-01-27 15:37:32.5579614 +0000 UTC m=+5539.142152404" watchObservedRunningTime="2026-01-27 15:37:32.568815696 +0000 UTC m=+5539.153006690" Jan 27 15:37:40 crc kubenswrapper[4729]: I0127 15:37:40.614355 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:40 crc kubenswrapper[4729]: I0127 15:37:40.614968 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:37:41 crc kubenswrapper[4729]: I0127 15:37:41.693286 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bpvzg" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="registry-server" probeResult="failure" output=< Jan 27 15:37:41 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:37:41 crc kubenswrapper[4729]: > Jan 27 15:37:51 crc kubenswrapper[4729]: I0127 15:37:51.666218 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bpvzg" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="registry-server" probeResult="failure" output=< Jan 27 15:37:51 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:37:51 crc kubenswrapper[4729]: > Jan 27 15:37:52 crc kubenswrapper[4729]: I0127 15:37:52.656299 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:37:52 crc kubenswrapper[4729]: I0127 15:37:52.657312 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:38:00 crc kubenswrapper[4729]: I0127 15:38:00.770986 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:38:00 crc kubenswrapper[4729]: I0127 15:38:00.846865 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:38:01 crc kubenswrapper[4729]: I0127 15:38:01.277716 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpvzg"] Jan 27 15:38:01 crc kubenswrapper[4729]: I0127 15:38:01.864865 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bpvzg" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="registry-server" containerID="cri-o://c1139ec87182bed02c5568f1e3cd06912a6a4d509e89a6d7e992e263ff4c8295" gracePeriod=2 Jan 27 15:38:02 crc kubenswrapper[4729]: I0127 15:38:02.884207 4729 generic.go:334] "Generic (PLEG): container finished" podID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerID="c1139ec87182bed02c5568f1e3cd06912a6a4d509e89a6d7e992e263ff4c8295" exitCode=0 Jan 27 15:38:02 crc kubenswrapper[4729]: I0127 15:38:02.885688 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerDied","Data":"c1139ec87182bed02c5568f1e3cd06912a6a4d509e89a6d7e992e263ff4c8295"} Jan 27 15:38:03 crc kubenswrapper[4729]: I0127 15:38:03.843441 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:38:03 crc kubenswrapper[4729]: I0127 15:38:03.917214 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bpvzg" event={"ID":"f4c77123-2bbb-417b-81f3-cfec7d4baa50","Type":"ContainerDied","Data":"569a28ea24e311005c3ecc5af95798adc7cb0bc883c2d915cff476c9879c551a"} Jan 27 15:38:03 crc kubenswrapper[4729]: I0127 15:38:03.917895 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bpvzg" Jan 27 15:38:03 crc kubenswrapper[4729]: I0127 15:38:03.919916 4729 scope.go:117] "RemoveContainer" containerID="c1139ec87182bed02c5568f1e3cd06912a6a4d509e89a6d7e992e263ff4c8295" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.035355 4729 scope.go:117] "RemoveContainer" containerID="42dea088bd1475097425862b6884e53deead9e80e25b9cda8a73bdda02f3899b" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.036633 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-catalog-content\") pod \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.036831 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck9s5\" (UniqueName: \"kubernetes.io/projected/f4c77123-2bbb-417b-81f3-cfec7d4baa50-kube-api-access-ck9s5\") pod \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.036950 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-utilities\") pod \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\" (UID: \"f4c77123-2bbb-417b-81f3-cfec7d4baa50\") " Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.052715 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-utilities" (OuterVolumeSpecName: "utilities") pod "f4c77123-2bbb-417b-81f3-cfec7d4baa50" (UID: "f4c77123-2bbb-417b-81f3-cfec7d4baa50"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.114303 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c77123-2bbb-417b-81f3-cfec7d4baa50-kube-api-access-ck9s5" (OuterVolumeSpecName: "kube-api-access-ck9s5") pod "f4c77123-2bbb-417b-81f3-cfec7d4baa50" (UID: "f4c77123-2bbb-417b-81f3-cfec7d4baa50"). InnerVolumeSpecName "kube-api-access-ck9s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.142774 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck9s5\" (UniqueName: \"kubernetes.io/projected/f4c77123-2bbb-417b-81f3-cfec7d4baa50-kube-api-access-ck9s5\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.142836 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.166436 4729 scope.go:117] "RemoveContainer" containerID="051469eb98ad94f9b71809d840a3ce1ec7f7c5d5fc30d7e89324d2df1aa90548" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.199463 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4c77123-2bbb-417b-81f3-cfec7d4baa50" (UID: "f4c77123-2bbb-417b-81f3-cfec7d4baa50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.244906 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4c77123-2bbb-417b-81f3-cfec7d4baa50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.353915 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpvzg"] Jan 27 15:38:04 crc kubenswrapper[4729]: I0127 15:38:04.369225 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bpvzg"] Jan 27 15:38:06 crc kubenswrapper[4729]: I0127 15:38:06.072008 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" path="/var/lib/kubelet/pods/f4c77123-2bbb-417b-81f3-cfec7d4baa50/volumes" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.562922 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sqs6r"] Jan 27 15:38:16 crc kubenswrapper[4729]: E0127 15:38:16.565099 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="extract-utilities" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.565144 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="extract-utilities" Jan 27 15:38:16 crc kubenswrapper[4729]: E0127 15:38:16.565162 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="extract-content" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.565171 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="extract-content" Jan 27 15:38:16 crc kubenswrapper[4729]: E0127 15:38:16.565211 4729 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="registry-server" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.565218 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="registry-server" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.567413 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c77123-2bbb-417b-81f3-cfec7d4baa50" containerName="registry-server" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.573451 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.675137 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqs6r"] Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.720379 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2dw\" (UniqueName: \"kubernetes.io/projected/73226073-0dbb-463f-9c0c-f616c1d878cc-kube-api-access-7m2dw\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.721114 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-utilities\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.721691 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-catalog-content\") pod \"redhat-operators-sqs6r\" (UID: 
\"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.826260 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-catalog-content\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.826455 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2dw\" (UniqueName: \"kubernetes.io/projected/73226073-0dbb-463f-9c0c-f616c1d878cc-kube-api-access-7m2dw\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.826554 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-utilities\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.981147 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-catalog-content\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.988699 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-utilities\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " 
pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:16 crc kubenswrapper[4729]: I0127 15:38:16.995787 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2dw\" (UniqueName: \"kubernetes.io/projected/73226073-0dbb-463f-9c0c-f616c1d878cc-kube-api-access-7m2dw\") pod \"redhat-operators-sqs6r\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:17 crc kubenswrapper[4729]: I0127 15:38:17.213441 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:18 crc kubenswrapper[4729]: I0127 15:38:18.772412 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqs6r"] Jan 27 15:38:19 crc kubenswrapper[4729]: I0127 15:38:19.092466 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerStarted","Data":"046402c9fa3afed2c1eb15fac36fa86191c65d0427259b841703a7ad991bb15f"} Jan 27 15:38:20 crc kubenswrapper[4729]: I0127 15:38:20.104507 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerDied","Data":"7f54babb7b5759c2c9f224c8fac7da89ccf8139f5f9495eb605ebb04644b151c"} Jan 27 15:38:20 crc kubenswrapper[4729]: I0127 15:38:20.107249 4729 generic.go:334] "Generic (PLEG): container finished" podID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerID="7f54babb7b5759c2c9f224c8fac7da89ccf8139f5f9495eb605ebb04644b151c" exitCode=0 Jan 27 15:38:22 crc kubenswrapper[4729]: I0127 15:38:22.139088 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" 
event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerStarted","Data":"03222d63bce124640e281fd4415e9e4f763b9298a6f8645942ff7cad4517f543"} Jan 27 15:38:22 crc kubenswrapper[4729]: I0127 15:38:22.658638 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:38:22 crc kubenswrapper[4729]: I0127 15:38:22.664390 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:38:32 crc kubenswrapper[4729]: I0127 15:38:32.253593 4729 generic.go:334] "Generic (PLEG): container finished" podID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerID="03222d63bce124640e281fd4415e9e4f763b9298a6f8645942ff7cad4517f543" exitCode=0 Jan 27 15:38:32 crc kubenswrapper[4729]: I0127 15:38:32.253682 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerDied","Data":"03222d63bce124640e281fd4415e9e4f763b9298a6f8645942ff7cad4517f543"} Jan 27 15:38:34 crc kubenswrapper[4729]: I0127 15:38:34.290416 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerStarted","Data":"1301cafd20ba477c2f56c8b993384c77425791bd1dcf7dbc2a624a49504ed71e"} Jan 27 15:38:34 crc kubenswrapper[4729]: I0127 15:38:34.315798 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sqs6r" 
podStartSLOduration=5.591800748 podStartE2EDuration="18.313949547s" podCreationTimestamp="2026-01-27 15:38:16 +0000 UTC" firstStartedPulling="2026-01-27 15:38:20.110393825 +0000 UTC m=+5586.694584829" lastFinishedPulling="2026-01-27 15:38:32.832542624 +0000 UTC m=+5599.416733628" observedRunningTime="2026-01-27 15:38:34.312321214 +0000 UTC m=+5600.896512238" watchObservedRunningTime="2026-01-27 15:38:34.313949547 +0000 UTC m=+5600.898140551" Jan 27 15:38:37 crc kubenswrapper[4729]: I0127 15:38:37.214951 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:37 crc kubenswrapper[4729]: I0127 15:38:37.215255 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:38:38 crc kubenswrapper[4729]: I0127 15:38:38.281628 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:38:38 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:38:38 crc kubenswrapper[4729]: > Jan 27 15:38:48 crc kubenswrapper[4729]: I0127 15:38:48.277283 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:38:48 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:38:48 crc kubenswrapper[4729]: > Jan 27 15:38:52 crc kubenswrapper[4729]: I0127 15:38:52.654839 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 27 15:38:52 crc kubenswrapper[4729]: I0127 15:38:52.655391 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:38:52 crc kubenswrapper[4729]: I0127 15:38:52.655448 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:38:52 crc kubenswrapper[4729]: I0127 15:38:52.657912 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18f5ce9911bb43727b936d4bb39e570589dcf1d163dc37c05478aab0b5adb754"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:38:52 crc kubenswrapper[4729]: I0127 15:38:52.658793 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://18f5ce9911bb43727b936d4bb39e570589dcf1d163dc37c05478aab0b5adb754" gracePeriod=600 Jan 27 15:38:53 crc kubenswrapper[4729]: I0127 15:38:53.527928 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="18f5ce9911bb43727b936d4bb39e570589dcf1d163dc37c05478aab0b5adb754" exitCode=0 Jan 27 15:38:53 crc kubenswrapper[4729]: I0127 15:38:53.527994 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"18f5ce9911bb43727b936d4bb39e570589dcf1d163dc37c05478aab0b5adb754"} Jan 27 15:38:53 crc kubenswrapper[4729]: I0127 15:38:53.528616 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e"} Jan 27 15:38:53 crc kubenswrapper[4729]: I0127 15:38:53.563277 4729 scope.go:117] "RemoveContainer" containerID="86bdfc5972d052fe78c11e9f9886a65fdb580f05846230ca75c7795d8d00ae42" Jan 27 15:38:58 crc kubenswrapper[4729]: I0127 15:38:58.409160 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:38:58 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:38:58 crc kubenswrapper[4729]: > Jan 27 15:39:08 crc kubenswrapper[4729]: I0127 15:39:08.279598 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:39:08 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:39:08 crc kubenswrapper[4729]: > Jan 27 15:39:18 crc kubenswrapper[4729]: I0127 15:39:18.270645 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:39:18 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:39:18 crc kubenswrapper[4729]: > Jan 27 15:39:28 crc kubenswrapper[4729]: I0127 15:39:28.290724 4729 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:39:28 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:39:28 crc kubenswrapper[4729]: > Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:33.525029 4729 patch_prober.go:28] interesting pod/metrics-server-85fb57964d-552jf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:33.525567 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-85fb57964d-552jf" podUID="a04e23f6-2034-4b06-9772-3b8ae9a3afa0" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:33.858578 4729 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5975c77b68-sdbrg container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:33.858642 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-5975c77b68-sdbrg" podUID="cf09e55d-e675-4bbe-aca3-853b9bc46cbc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:34.881779 4729 patch_prober.go:28] interesting 
pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:34.891138 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="cc44c481-9e30-42f7-883b-209184e04fba" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:34.902017 4729 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.540220359s: [/var/lib/containers/storage/overlay/afb2bec6539e6dc9d3eb535094433567d2addb95e6febc472475f3838e0003be/diff /var/log/pods/openstack_cinder-scheduler-0_c2742dbf-31d5-4550-88df-d1b01e4f7dc4/cinder-scheduler/0.log]; will not log again for this container unless duration exceeds 2s Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:34.905527 4729 trace.go:236] Trace[323004069]: "Calculate volume metrics of metrics-client-ca for pod openshift-monitoring/kube-state-metrics-777cb5bd5d-xkng8" (27-Jan-2026 15:39:33.503) (total time: 1384ms): Jan 27 15:39:34 crc kubenswrapper[4729]: Trace[323004069]: [1.384227313s] [1.384227313s] END Jan 27 15:39:34 crc kubenswrapper[4729]: I0127 15:39:34.918698 4729 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.572192674s: [/var/lib/containers/storage/overlay/a5362769ad42b093a51a288b501d965f5bdea323a854644944e7e28660169d4c/diff /var/log/pods/openstack_cinder-scheduler-0_c2742dbf-31d5-4550-88df-d1b01e4f7dc4/probe/0.log]; will not log again for this container unless duration exceeds 2s Jan 27 15:39:38 crc kubenswrapper[4729]: I0127 15:39:38.888017 4729 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:39:38 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:39:38 crc kubenswrapper[4729]: > Jan 27 15:39:48 crc kubenswrapper[4729]: I0127 15:39:48.280014 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:39:48 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:39:48 crc kubenswrapper[4729]: > Jan 27 15:39:58 crc kubenswrapper[4729]: I0127 15:39:58.689934 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:39:58 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:39:58 crc kubenswrapper[4729]: > Jan 27 15:40:07 crc kubenswrapper[4729]: I0127 15:40:07.448258 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:40:07 crc kubenswrapper[4729]: I0127 15:40:07.519169 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:40:07 crc kubenswrapper[4729]: I0127 15:40:07.724488 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqs6r"] Jan 27 15:40:09 crc kubenswrapper[4729]: I0127 15:40:09.347542 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sqs6r" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" 
containerID="cri-o://1301cafd20ba477c2f56c8b993384c77425791bd1dcf7dbc2a624a49504ed71e" gracePeriod=2 Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.361261 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerDied","Data":"1301cafd20ba477c2f56c8b993384c77425791bd1dcf7dbc2a624a49504ed71e"} Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.361975 4729 generic.go:334] "Generic (PLEG): container finished" podID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerID="1301cafd20ba477c2f56c8b993384c77425791bd1dcf7dbc2a624a49504ed71e" exitCode=0 Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.803382 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.901071 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m2dw\" (UniqueName: \"kubernetes.io/projected/73226073-0dbb-463f-9c0c-f616c1d878cc-kube-api-access-7m2dw\") pod \"73226073-0dbb-463f-9c0c-f616c1d878cc\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.901321 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-catalog-content\") pod \"73226073-0dbb-463f-9c0c-f616c1d878cc\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.901490 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-utilities\") pod \"73226073-0dbb-463f-9c0c-f616c1d878cc\" (UID: \"73226073-0dbb-463f-9c0c-f616c1d878cc\") " Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 
15:40:10.914455 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-utilities" (OuterVolumeSpecName: "utilities") pod "73226073-0dbb-463f-9c0c-f616c1d878cc" (UID: "73226073-0dbb-463f-9c0c-f616c1d878cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:40:10 crc kubenswrapper[4729]: I0127 15:40:10.956236 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73226073-0dbb-463f-9c0c-f616c1d878cc-kube-api-access-7m2dw" (OuterVolumeSpecName: "kube-api-access-7m2dw") pod "73226073-0dbb-463f-9c0c-f616c1d878cc" (UID: "73226073-0dbb-463f-9c0c-f616c1d878cc"). InnerVolumeSpecName "kube-api-access-7m2dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.008545 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.008632 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m2dw\" (UniqueName: \"kubernetes.io/projected/73226073-0dbb-463f-9c0c-f616c1d878cc-kube-api-access-7m2dw\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.207899 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73226073-0dbb-463f-9c0c-f616c1d878cc" (UID: "73226073-0dbb-463f-9c0c-f616c1d878cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.215292 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73226073-0dbb-463f-9c0c-f616c1d878cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.376708 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqs6r" event={"ID":"73226073-0dbb-463f-9c0c-f616c1d878cc","Type":"ContainerDied","Data":"046402c9fa3afed2c1eb15fac36fa86191c65d0427259b841703a7ad991bb15f"} Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.377996 4729 scope.go:117] "RemoveContainer" containerID="1301cafd20ba477c2f56c8b993384c77425791bd1dcf7dbc2a624a49504ed71e" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.378158 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqs6r" Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.420820 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqs6r"] Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.442437 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sqs6r"] Jan 27 15:40:11 crc kubenswrapper[4729]: I0127 15:40:11.448866 4729 scope.go:117] "RemoveContainer" containerID="03222d63bce124640e281fd4415e9e4f763b9298a6f8645942ff7cad4517f543" Jan 27 15:40:12 crc kubenswrapper[4729]: I0127 15:40:12.062075 4729 scope.go:117] "RemoveContainer" containerID="7f54babb7b5759c2c9f224c8fac7da89ccf8139f5f9495eb605ebb04644b151c" Jan 27 15:40:12 crc kubenswrapper[4729]: I0127 15:40:12.072743 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" path="/var/lib/kubelet/pods/73226073-0dbb-463f-9c0c-f616c1d878cc/volumes" Jan 27 15:41:22 crc 
kubenswrapper[4729]: I0127 15:41:22.655462 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:41:22 crc kubenswrapper[4729]: I0127 15:41:22.656474 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:41:52 crc kubenswrapper[4729]: I0127 15:41:52.655040 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:41:52 crc kubenswrapper[4729]: I0127 15:41:52.655540 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.655725 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.656284 4729 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.656337 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.657728 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.657797 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" gracePeriod=600 Jan 27 15:42:22 crc kubenswrapper[4729]: E0127 15:42:22.844183 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.854372 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" exitCode=0 Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.854571 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e"} Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.854671 4729 scope.go:117] "RemoveContainer" containerID="18f5ce9911bb43727b936d4bb39e570589dcf1d163dc37c05478aab0b5adb754" Jan 27 15:42:22 crc kubenswrapper[4729]: I0127 15:42:22.855732 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:42:22 crc kubenswrapper[4729]: E0127 15:42:22.856129 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:42:22 crc kubenswrapper[4729]: E0127 15:42:22.947777 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8919c7c3_b36c_4bf1_8aed_355b818721a4.slice/crio-30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8919c7c3_b36c_4bf1_8aed_355b818721a4.slice/crio-conmon-30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e.scope\": RecentStats: unable to find data in memory cache]" Jan 27 15:42:37 crc kubenswrapper[4729]: I0127 15:42:37.051056 
4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:42:37 crc kubenswrapper[4729]: E0127 15:42:37.051787 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:42:49 crc kubenswrapper[4729]: I0127 15:42:49.051676 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:42:49 crc kubenswrapper[4729]: E0127 15:42:49.052426 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.388346 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pd8sq"] Jan 27 15:42:55 crc kubenswrapper[4729]: E0127 15:42:55.396344 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="extract-content" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.396378 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="extract-content" Jan 27 15:42:55 crc kubenswrapper[4729]: E0127 15:42:55.396416 4729 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.396423 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" Jan 27 15:42:55 crc kubenswrapper[4729]: E0127 15:42:55.396434 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="extract-utilities" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.396441 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="extract-utilities" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.397943 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="73226073-0dbb-463f-9c0c-f616c1d878cc" containerName="registry-server" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.408364 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.484112 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pd8sq"] Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.486314 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9l8\" (UniqueName: \"kubernetes.io/projected/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-kube-api-access-zm9l8\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.486515 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-catalog-content\") pod \"certified-operators-pd8sq\" (UID: 
\"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.487019 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-utilities\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.590922 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9l8\" (UniqueName: \"kubernetes.io/projected/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-kube-api-access-zm9l8\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.591019 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-catalog-content\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.591234 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-utilities\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.597049 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-utilities\") pod \"certified-operators-pd8sq\" (UID: 
\"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.597909 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-catalog-content\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.625493 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9l8\" (UniqueName: \"kubernetes.io/projected/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-kube-api-access-zm9l8\") pod \"certified-operators-pd8sq\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:55 crc kubenswrapper[4729]: I0127 15:42:55.737064 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:42:57 crc kubenswrapper[4729]: I0127 15:42:57.063597 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pd8sq"] Jan 27 15:42:58 crc kubenswrapper[4729]: I0127 15:42:58.222762 4729 generic.go:334] "Generic (PLEG): container finished" podID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerID="2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271" exitCode=0 Jan 27 15:42:58 crc kubenswrapper[4729]: I0127 15:42:58.222859 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerDied","Data":"2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271"} Jan 27 15:42:58 crc kubenswrapper[4729]: I0127 15:42:58.223468 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerStarted","Data":"29aa026175955a1f74a1be58365b79e4beaed1aacc6d553792a73ed548c6fd5b"} Jan 27 15:42:58 crc kubenswrapper[4729]: I0127 15:42:58.229391 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:43:00 crc kubenswrapper[4729]: I0127 15:43:00.245236 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerStarted","Data":"f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362"} Jan 27 15:43:01 crc kubenswrapper[4729]: I0127 15:43:01.051108 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:43:01 crc kubenswrapper[4729]: E0127 15:43:01.051919 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:43:02 crc kubenswrapper[4729]: I0127 15:43:02.267566 4729 generic.go:334] "Generic (PLEG): container finished" podID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerID="f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362" exitCode=0 Jan 27 15:43:02 crc kubenswrapper[4729]: I0127 15:43:02.267624 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerDied","Data":"f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362"} Jan 27 15:43:03 crc kubenswrapper[4729]: I0127 15:43:03.284270 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerStarted","Data":"ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2"} Jan 27 15:43:03 crc kubenswrapper[4729]: I0127 15:43:03.304475 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pd8sq" podStartSLOduration=3.7209564139999998 podStartE2EDuration="8.30341475s" podCreationTimestamp="2026-01-27 15:42:55 +0000 UTC" firstStartedPulling="2026-01-27 15:42:58.225183431 +0000 UTC m=+5864.809374435" lastFinishedPulling="2026-01-27 15:43:02.807641767 +0000 UTC m=+5869.391832771" observedRunningTime="2026-01-27 15:43:03.299768614 +0000 UTC m=+5869.883959618" watchObservedRunningTime="2026-01-27 15:43:03.30341475 +0000 UTC m=+5869.887605754" Jan 27 15:43:05 crc kubenswrapper[4729]: I0127 15:43:05.581368 4729 patch_prober.go:28] interesting 
pod/perses-operator-5bf474d74f-p5mb2 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:43:05 crc kubenswrapper[4729]: I0127 15:43:05.581726 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" podUID="a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:43:05 crc kubenswrapper[4729]: I0127 15:43:05.581916 4729 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-p5mb2 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.19:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:43:05 crc kubenswrapper[4729]: I0127 15:43:05.581948 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-p5mb2" podUID="a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.19:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 15:43:05 crc kubenswrapper[4729]: I0127 15:43:05.737819 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:43:05 crc kubenswrapper[4729]: I0127 15:43:05.738220 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:43:06 crc kubenswrapper[4729]: I0127 15:43:06.851592 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pd8sq" 
podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" probeResult="failure" output=< Jan 27 15:43:06 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:43:06 crc kubenswrapper[4729]: > Jan 27 15:43:06 crc kubenswrapper[4729]: I0127 15:43:06.853056 4729 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.805544386s: [/var/lib/containers/storage/overlay/c6f6bc74485162ea9b5959c612e97f4fa2f8e6a36bc82baaeb8a4ff3d19360b7/diff /var/log/pods/minio-dev_minio_ee8a7a98-2cea-4e17-9754-2505d70ca626/minio/0.log]; will not log again for this container unless duration exceeds 2s Jan 27 15:43:06 crc kubenswrapper[4729]: I0127 15:43:06.856047 4729 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-krhqs container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 15:43:06 crc kubenswrapper[4729]: I0127 15:43:06.856104 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-krhqs" podUID="849503fd-ce3e-42e9-bae7-596c510d2b8b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 27 15:43:12 crc kubenswrapper[4729]: I0127 15:43:12.051472 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:43:12 crc kubenswrapper[4729]: E0127 15:43:12.053016 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:43:15 crc kubenswrapper[4729]: I0127 15:43:15.915935 4729 scope.go:117] "RemoveContainer" containerID="5ae40715314bdb9830ada23a6bdd368f01cf5e4196291ad7bedb7bcfff92a799" Jan 27 15:43:15 crc kubenswrapper[4729]: I0127 15:43:15.953431 4729 scope.go:117] "RemoveContainer" containerID="f78479444095c58f7688b5e2e00ac1c0f547e1cea501d5ce54e926e88d7f23ee" Jan 27 15:43:15 crc kubenswrapper[4729]: I0127 15:43:15.988107 4729 scope.go:117] "RemoveContainer" containerID="8310f6d99a5acef23c7440dc4b3a5d27abbc616ca4438df9ef932d8ae66ad250" Jan 27 15:43:16 crc kubenswrapper[4729]: I0127 15:43:16.790552 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pd8sq" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" probeResult="failure" output=< Jan 27 15:43:16 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:43:16 crc kubenswrapper[4729]: > Jan 27 15:43:26 crc kubenswrapper[4729]: I0127 15:43:26.805930 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pd8sq" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" probeResult="failure" output=< Jan 27 15:43:26 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:43:26 crc kubenswrapper[4729]: > Jan 27 15:43:27 crc kubenswrapper[4729]: I0127 15:43:27.050766 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:43:27 crc kubenswrapper[4729]: E0127 15:43:27.051121 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:43:35 crc kubenswrapper[4729]: I0127 15:43:35.818765 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:43:35 crc kubenswrapper[4729]: I0127 15:43:35.877758 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:43:36 crc kubenswrapper[4729]: I0127 15:43:36.070789 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pd8sq"] Jan 27 15:43:36 crc kubenswrapper[4729]: I0127 15:43:36.871733 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pd8sq" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" containerID="cri-o://ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2" gracePeriod=2 Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.852775 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.884229 4729 generic.go:334] "Generic (PLEG): container finished" podID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerID="ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2" exitCode=0 Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.884285 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerDied","Data":"ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2"} Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.884306 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pd8sq" Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.884336 4729 scope.go:117] "RemoveContainer" containerID="ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2" Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.884321 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pd8sq" event={"ID":"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003","Type":"ContainerDied","Data":"29aa026175955a1f74a1be58365b79e4beaed1aacc6d553792a73ed548c6fd5b"} Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.911325 4729 scope.go:117] "RemoveContainer" containerID="f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362" Jan 27 15:43:37 crc kubenswrapper[4729]: I0127 15:43:37.937524 4729 scope.go:117] "RemoveContainer" containerID="2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.004019 4729 scope.go:117] "RemoveContainer" containerID="ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2" Jan 27 15:43:38 crc kubenswrapper[4729]: E0127 15:43:38.006256 4729 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2\": container with ID starting with ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2 not found: ID does not exist" containerID="ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.006350 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2"} err="failed to get container status \"ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2\": rpc error: code = NotFound desc = could not find container \"ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2\": container with ID starting with ce7ffe95fb0c34aee269728a3f1dd92cd9aa9350f721f1f291668bb8b4ceeae2 not found: ID does not exist" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.006397 4729 scope.go:117] "RemoveContainer" containerID="f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362" Jan 27 15:43:38 crc kubenswrapper[4729]: E0127 15:43:38.006769 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362\": container with ID starting with f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362 not found: ID does not exist" containerID="f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.006800 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362"} err="failed to get container status \"f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362\": rpc error: code = NotFound desc = could not find container 
\"f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362\": container with ID starting with f781d2b6d0b137457d5c4b0d099102da8ce1335b2bd9bfdee38dc97110347362 not found: ID does not exist" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.006818 4729 scope.go:117] "RemoveContainer" containerID="2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271" Jan 27 15:43:38 crc kubenswrapper[4729]: E0127 15:43:38.007267 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271\": container with ID starting with 2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271 not found: ID does not exist" containerID="2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.007289 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271"} err="failed to get container status \"2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271\": rpc error: code = NotFound desc = could not find container \"2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271\": container with ID starting with 2c11398ffbaf14ffa8d9356d7b15e5d1b0b9a628ff6473678c199a195f2c6271 not found: ID does not exist" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.050919 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm9l8\" (UniqueName: \"kubernetes.io/projected/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-kube-api-access-zm9l8\") pod \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.051068 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-utilities\") pod \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.051213 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-catalog-content\") pod \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\" (UID: \"2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003\") " Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.051280 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:43:38 crc kubenswrapper[4729]: E0127 15:43:38.051626 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.054304 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-utilities" (OuterVolumeSpecName: "utilities") pod "2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" (UID: "2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.076981 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-kube-api-access-zm9l8" (OuterVolumeSpecName: "kube-api-access-zm9l8") pod "2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" (UID: "2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003"). 
InnerVolumeSpecName "kube-api-access-zm9l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.110512 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" (UID: "2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.154987 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.155525 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm9l8\" (UniqueName: \"kubernetes.io/projected/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-kube-api-access-zm9l8\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.155548 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.230861 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pd8sq"] Jan 27 15:43:38 crc kubenswrapper[4729]: I0127 15:43:38.269722 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pd8sq"] Jan 27 15:43:40 crc kubenswrapper[4729]: I0127 15:43:40.075413 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" path="/var/lib/kubelet/pods/2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003/volumes" Jan 27 15:43:49 crc kubenswrapper[4729]: I0127 
15:43:49.051282 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:43:49 crc kubenswrapper[4729]: E0127 15:43:49.052194 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:44:00 crc kubenswrapper[4729]: I0127 15:44:00.051774 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:44:00 crc kubenswrapper[4729]: E0127 15:44:00.052669 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:44:13 crc kubenswrapper[4729]: I0127 15:44:13.051749 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:44:13 crc kubenswrapper[4729]: E0127 15:44:13.052694 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:44:28 crc 
kubenswrapper[4729]: I0127 15:44:28.051966 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:44:28 crc kubenswrapper[4729]: E0127 15:44:28.052781 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:44:43 crc kubenswrapper[4729]: I0127 15:44:43.050905 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:44:43 crc kubenswrapper[4729]: E0127 15:44:43.051722 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:44:56 crc kubenswrapper[4729]: I0127 15:44:56.052488 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:44:56 crc kubenswrapper[4729]: E0127 15:44:56.054755 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 
27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.238051 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95"] Jan 27 15:45:00 crc kubenswrapper[4729]: E0127 15:45:00.240272 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.240387 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4729]: E0127 15:45:00.240485 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="extract-content" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.240579 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="extract-content" Jan 27 15:45:00 crc kubenswrapper[4729]: E0127 15:45:00.240695 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="extract-utilities" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.240716 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="extract-utilities" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.241080 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2648b2a3-a72b-40d7-8b1a-3e7b5a0ed003" containerName="registry-server" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.242151 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.251425 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95"] Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.280024 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.284666 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.367668 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11d87bf-75c7-42fa-b121-c67dd5df68af-secret-volume\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.367901 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11d87bf-75c7-42fa-b121-c67dd5df68af-config-volume\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.367961 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkj2l\" (UniqueName: \"kubernetes.io/projected/a11d87bf-75c7-42fa-b121-c67dd5df68af-kube-api-access-hkj2l\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.470212 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11d87bf-75c7-42fa-b121-c67dd5df68af-config-volume\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.470300 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkj2l\" (UniqueName: \"kubernetes.io/projected/a11d87bf-75c7-42fa-b121-c67dd5df68af-kube-api-access-hkj2l\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.470446 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11d87bf-75c7-42fa-b121-c67dd5df68af-secret-volume\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.472341 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11d87bf-75c7-42fa-b121-c67dd5df68af-config-volume\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.478836 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a11d87bf-75c7-42fa-b121-c67dd5df68af-secret-volume\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.492551 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkj2l\" (UniqueName: \"kubernetes.io/projected/a11d87bf-75c7-42fa-b121-c67dd5df68af-kube-api-access-hkj2l\") pod \"collect-profiles-29492145-dcd95\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:00 crc kubenswrapper[4729]: I0127 15:45:00.584858 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:01 crc kubenswrapper[4729]: I0127 15:45:01.422889 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95"] Jan 27 15:45:01 crc kubenswrapper[4729]: I0127 15:45:01.816479 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" event={"ID":"a11d87bf-75c7-42fa-b121-c67dd5df68af","Type":"ContainerStarted","Data":"a2310ae22ae2b1ed656f05bd8466b18c586d92b5be92fbaf16c43224c07468da"} Jan 27 15:45:01 crc kubenswrapper[4729]: I0127 15:45:01.816948 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" event={"ID":"a11d87bf-75c7-42fa-b121-c67dd5df68af","Type":"ContainerStarted","Data":"f635d6b4c87f22640ae67332a600e8a449d6dbdebb2a2fd6d0619360734899f4"} Jan 27 15:45:01 crc kubenswrapper[4729]: I0127 15:45:01.838979 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" 
podStartSLOduration=1.838956604 podStartE2EDuration="1.838956604s" podCreationTimestamp="2026-01-27 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 15:45:01.836620632 +0000 UTC m=+5988.420811636" watchObservedRunningTime="2026-01-27 15:45:01.838956604 +0000 UTC m=+5988.423147648" Jan 27 15:45:03 crc kubenswrapper[4729]: I0127 15:45:03.845294 4729 generic.go:334] "Generic (PLEG): container finished" podID="a11d87bf-75c7-42fa-b121-c67dd5df68af" containerID="a2310ae22ae2b1ed656f05bd8466b18c586d92b5be92fbaf16c43224c07468da" exitCode=0 Jan 27 15:45:03 crc kubenswrapper[4729]: I0127 15:45:03.845698 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" event={"ID":"a11d87bf-75c7-42fa-b121-c67dd5df68af","Type":"ContainerDied","Data":"a2310ae22ae2b1ed656f05bd8466b18c586d92b5be92fbaf16c43224c07468da"} Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.488404 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.671759 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11d87bf-75c7-42fa-b121-c67dd5df68af-config-volume\") pod \"a11d87bf-75c7-42fa-b121-c67dd5df68af\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.672108 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11d87bf-75c7-42fa-b121-c67dd5df68af-secret-volume\") pod \"a11d87bf-75c7-42fa-b121-c67dd5df68af\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.672141 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkj2l\" (UniqueName: \"kubernetes.io/projected/a11d87bf-75c7-42fa-b121-c67dd5df68af-kube-api-access-hkj2l\") pod \"a11d87bf-75c7-42fa-b121-c67dd5df68af\" (UID: \"a11d87bf-75c7-42fa-b121-c67dd5df68af\") " Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.672455 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a11d87bf-75c7-42fa-b121-c67dd5df68af-config-volume" (OuterVolumeSpecName: "config-volume") pod "a11d87bf-75c7-42fa-b121-c67dd5df68af" (UID: "a11d87bf-75c7-42fa-b121-c67dd5df68af"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.674046 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a11d87bf-75c7-42fa-b121-c67dd5df68af-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.678471 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a11d87bf-75c7-42fa-b121-c67dd5df68af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a11d87bf-75c7-42fa-b121-c67dd5df68af" (UID: "a11d87bf-75c7-42fa-b121-c67dd5df68af"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.690442 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11d87bf-75c7-42fa-b121-c67dd5df68af-kube-api-access-hkj2l" (OuterVolumeSpecName: "kube-api-access-hkj2l") pod "a11d87bf-75c7-42fa-b121-c67dd5df68af" (UID: "a11d87bf-75c7-42fa-b121-c67dd5df68af"). InnerVolumeSpecName "kube-api-access-hkj2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.777051 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a11d87bf-75c7-42fa-b121-c67dd5df68af-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.777372 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkj2l\" (UniqueName: \"kubernetes.io/projected/a11d87bf-75c7-42fa-b121-c67dd5df68af-kube-api-access-hkj2l\") on node \"crc\" DevicePath \"\"" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.868006 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" event={"ID":"a11d87bf-75c7-42fa-b121-c67dd5df68af","Type":"ContainerDied","Data":"f635d6b4c87f22640ae67332a600e8a449d6dbdebb2a2fd6d0619360734899f4"} Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.868059 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492145-dcd95" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:05.868063 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f635d6b4c87f22640ae67332a600e8a449d6dbdebb2a2fd6d0619360734899f4" Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:06.043649 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh"] Jan 27 15:45:06 crc kubenswrapper[4729]: I0127 15:45:06.065451 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492100-g9xxh"] Jan 27 15:45:08 crc kubenswrapper[4729]: I0127 15:45:08.068377 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdeeb7ac-7c92-41c9-b09c-aaaca507e15b" path="/var/lib/kubelet/pods/bdeeb7ac-7c92-41c9-b09c-aaaca507e15b/volumes" Jan 27 15:45:09 crc kubenswrapper[4729]: I0127 15:45:09.051482 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:45:09 crc kubenswrapper[4729]: E0127 15:45:09.052322 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:45:16 crc kubenswrapper[4729]: I0127 15:45:16.215745 4729 scope.go:117] "RemoveContainer" containerID="53d92d5287bd09d84f67cb62e3b754f06541eeb7a6a47271c86791a8f532d122" Jan 27 15:45:23 crc kubenswrapper[4729]: I0127 15:45:23.051031 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 
15:45:23 crc kubenswrapper[4729]: E0127 15:45:23.051917 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:45:34 crc kubenswrapper[4729]: I0127 15:45:34.063379 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:45:34 crc kubenswrapper[4729]: E0127 15:45:34.064324 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:45:46 crc kubenswrapper[4729]: I0127 15:45:46.052343 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:45:46 crc kubenswrapper[4729]: E0127 15:45:46.053099 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:45:57 crc kubenswrapper[4729]: I0127 15:45:57.051508 4729 scope.go:117] "RemoveContainer" 
containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:45:57 crc kubenswrapper[4729]: E0127 15:45:57.052617 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:46:08 crc kubenswrapper[4729]: I0127 15:46:08.051802 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:46:08 crc kubenswrapper[4729]: E0127 15:46:08.052674 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:46:21 crc kubenswrapper[4729]: I0127 15:46:21.051871 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:46:21 crc kubenswrapper[4729]: E0127 15:46:21.052815 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.451606 4729 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nc92f"] Jan 27 15:46:34 crc kubenswrapper[4729]: E0127 15:46:34.452796 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11d87bf-75c7-42fa-b121-c67dd5df68af" containerName="collect-profiles" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.452817 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11d87bf-75c7-42fa-b121-c67dd5df68af" containerName="collect-profiles" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.453125 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11d87bf-75c7-42fa-b121-c67dd5df68af" containerName="collect-profiles" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.456004 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.465006 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc92f"] Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.584182 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-catalog-content\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.584267 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-utilities\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.584316 4729 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bct\" (UniqueName: \"kubernetes.io/projected/d992b2dc-f1a1-405a-8372-79206b9207bd-kube-api-access-z8bct\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.686770 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-catalog-content\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.686897 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-utilities\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.686966 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bct\" (UniqueName: \"kubernetes.io/projected/d992b2dc-f1a1-405a-8372-79206b9207bd-kube-api-access-z8bct\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.688639 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-catalog-content\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.688961 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-utilities\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.713962 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bct\" (UniqueName: \"kubernetes.io/projected/d992b2dc-f1a1-405a-8372-79206b9207bd-kube-api-access-z8bct\") pod \"community-operators-nc92f\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:34 crc kubenswrapper[4729]: I0127 15:46:34.778298 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:35 crc kubenswrapper[4729]: I0127 15:46:35.292939 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nc92f"] Jan 27 15:46:35 crc kubenswrapper[4729]: I0127 15:46:35.857448 4729 generic.go:334] "Generic (PLEG): container finished" podID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerID="640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372" exitCode=0 Jan 27 15:46:35 crc kubenswrapper[4729]: I0127 15:46:35.857798 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerDied","Data":"640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372"} Jan 27 15:46:35 crc kubenswrapper[4729]: I0127 15:46:35.857838 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerStarted","Data":"487ea6781444ad1dafc3fe753c3b867d05997a69408d10847de38b34b2ed7379"} Jan 27 15:46:36 crc 
kubenswrapper[4729]: I0127 15:46:36.051707 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:46:36 crc kubenswrapper[4729]: E0127 15:46:36.052040 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:46:37 crc kubenswrapper[4729]: I0127 15:46:37.898276 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerStarted","Data":"7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5"} Jan 27 15:46:40 crc kubenswrapper[4729]: I0127 15:46:40.935840 4729 generic.go:334] "Generic (PLEG): container finished" podID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerID="7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5" exitCode=0 Jan 27 15:46:40 crc kubenswrapper[4729]: I0127 15:46:40.935941 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerDied","Data":"7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5"} Jan 27 15:46:41 crc kubenswrapper[4729]: I0127 15:46:41.949661 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerStarted","Data":"21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc"} Jan 27 15:46:41 crc kubenswrapper[4729]: I0127 15:46:41.971840 4729 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-nc92f" podStartSLOduration=2.38396395 podStartE2EDuration="7.97181864s" podCreationTimestamp="2026-01-27 15:46:34 +0000 UTC" firstStartedPulling="2026-01-27 15:46:35.860002086 +0000 UTC m=+6082.444193111" lastFinishedPulling="2026-01-27 15:46:41.447856797 +0000 UTC m=+6088.032047801" observedRunningTime="2026-01-27 15:46:41.96882539 +0000 UTC m=+6088.553016404" watchObservedRunningTime="2026-01-27 15:46:41.97181864 +0000 UTC m=+6088.556009644" Jan 27 15:46:44 crc kubenswrapper[4729]: I0127 15:46:44.778988 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:44 crc kubenswrapper[4729]: I0127 15:46:44.779527 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:46:45 crc kubenswrapper[4729]: I0127 15:46:45.830196 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nc92f" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="registry-server" probeResult="failure" output=< Jan 27 15:46:45 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:46:45 crc kubenswrapper[4729]: > Jan 27 15:46:48 crc kubenswrapper[4729]: I0127 15:46:48.052331 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:46:48 crc kubenswrapper[4729]: E0127 15:46:48.053409 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:46:55 
crc kubenswrapper[4729]: I0127 15:46:55.828745 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nc92f" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="registry-server" probeResult="failure" output=< Jan 27 15:46:55 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:46:55 crc kubenswrapper[4729]: > Jan 27 15:47:00 crc kubenswrapper[4729]: I0127 15:47:00.052105 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:47:00 crc kubenswrapper[4729]: E0127 15:47:00.053090 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:47:04 crc kubenswrapper[4729]: I0127 15:47:04.835319 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:47:04 crc kubenswrapper[4729]: I0127 15:47:04.901427 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:47:05 crc kubenswrapper[4729]: I0127 15:47:05.657547 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc92f"] Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.222095 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nc92f" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="registry-server" containerID="cri-o://21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc" 
gracePeriod=2 Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.827740 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.973086 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-catalog-content\") pod \"d992b2dc-f1a1-405a-8372-79206b9207bd\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.973562 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-utilities\") pod \"d992b2dc-f1a1-405a-8372-79206b9207bd\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.973966 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8bct\" (UniqueName: \"kubernetes.io/projected/d992b2dc-f1a1-405a-8372-79206b9207bd-kube-api-access-z8bct\") pod \"d992b2dc-f1a1-405a-8372-79206b9207bd\" (UID: \"d992b2dc-f1a1-405a-8372-79206b9207bd\") " Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.974368 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-utilities" (OuterVolumeSpecName: "utilities") pod "d992b2dc-f1a1-405a-8372-79206b9207bd" (UID: "d992b2dc-f1a1-405a-8372-79206b9207bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.975053 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:06 crc kubenswrapper[4729]: I0127 15:47:06.985748 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d992b2dc-f1a1-405a-8372-79206b9207bd-kube-api-access-z8bct" (OuterVolumeSpecName: "kube-api-access-z8bct") pod "d992b2dc-f1a1-405a-8372-79206b9207bd" (UID: "d992b2dc-f1a1-405a-8372-79206b9207bd"). InnerVolumeSpecName "kube-api-access-z8bct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.036724 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d992b2dc-f1a1-405a-8372-79206b9207bd" (UID: "d992b2dc-f1a1-405a-8372-79206b9207bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.077670 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8bct\" (UniqueName: \"kubernetes.io/projected/d992b2dc-f1a1-405a-8372-79206b9207bd-kube-api-access-z8bct\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.077714 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d992b2dc-f1a1-405a-8372-79206b9207bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.235061 4729 generic.go:334] "Generic (PLEG): container finished" podID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerID="21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc" exitCode=0 Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.235118 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerDied","Data":"21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc"} Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.235141 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nc92f" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.235167 4729 scope.go:117] "RemoveContainer" containerID="21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.235153 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nc92f" event={"ID":"d992b2dc-f1a1-405a-8372-79206b9207bd","Type":"ContainerDied","Data":"487ea6781444ad1dafc3fe753c3b867d05997a69408d10847de38b34b2ed7379"} Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.266258 4729 scope.go:117] "RemoveContainer" containerID="7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.290448 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nc92f"] Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.295695 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nc92f"] Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.312774 4729 scope.go:117] "RemoveContainer" containerID="640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.358494 4729 scope.go:117] "RemoveContainer" containerID="21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc" Jan 27 15:47:07 crc kubenswrapper[4729]: E0127 15:47:07.359190 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc\": container with ID starting with 21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc not found: ID does not exist" containerID="21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.359296 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc"} err="failed to get container status \"21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc\": rpc error: code = NotFound desc = could not find container \"21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc\": container with ID starting with 21a56da78a08386447c35f5f91b86e629543420c8d70caf5518f652df5c734fc not found: ID does not exist" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.359340 4729 scope.go:117] "RemoveContainer" containerID="7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5" Jan 27 15:47:07 crc kubenswrapper[4729]: E0127 15:47:07.359771 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5\": container with ID starting with 7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5 not found: ID does not exist" containerID="7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.359818 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5"} err="failed to get container status \"7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5\": rpc error: code = NotFound desc = could not find container \"7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5\": container with ID starting with 7bc418809f542e2721f6fbe6310726ee8947c37fe888fc871e4cb6e1a5454fc5 not found: ID does not exist" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.359858 4729 scope.go:117] "RemoveContainer" containerID="640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372" Jan 27 15:47:07 crc kubenswrapper[4729]: E0127 
15:47:07.360361 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372\": container with ID starting with 640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372 not found: ID does not exist" containerID="640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372" Jan 27 15:47:07 crc kubenswrapper[4729]: I0127 15:47:07.360490 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372"} err="failed to get container status \"640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372\": rpc error: code = NotFound desc = could not find container \"640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372\": container with ID starting with 640cb6c69cf89912446e2ffdd1bb8d5de51c57dd9728807729d80f9da639e372 not found: ID does not exist" Jan 27 15:47:08 crc kubenswrapper[4729]: I0127 15:47:08.069131 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" path="/var/lib/kubelet/pods/d992b2dc-f1a1-405a-8372-79206b9207bd/volumes" Jan 27 15:47:12 crc kubenswrapper[4729]: I0127 15:47:12.051500 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:47:12 crc kubenswrapper[4729]: E0127 15:47:12.052374 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:47:26 crc kubenswrapper[4729]: I0127 15:47:26.051639 
4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:47:26 crc kubenswrapper[4729]: I0127 15:47:26.450419 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"6c5b09e22865c0fa51aa21d83edb7fa1332941e2d3d7ea84bff8ea149a2ff7f6"} Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.565887 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jj4xc"] Jan 27 15:47:44 crc kubenswrapper[4729]: E0127 15:47:44.567037 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="registry-server" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.567052 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="registry-server" Jan 27 15:47:44 crc kubenswrapper[4729]: E0127 15:47:44.567084 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="extract-content" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.567089 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="extract-content" Jan 27 15:47:44 crc kubenswrapper[4729]: E0127 15:47:44.567116 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="extract-utilities" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.567122 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" containerName="extract-utilities" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.567370 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d992b2dc-f1a1-405a-8372-79206b9207bd" 
containerName="registry-server" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.589694 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.609076 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj4xc"] Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.721099 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-catalog-content\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.721251 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-utilities\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.721310 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k9p\" (UniqueName: \"kubernetes.io/projected/359cb11e-11aa-41e5-866c-49247a412d81-kube-api-access-n8k9p\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.823899 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k9p\" (UniqueName: \"kubernetes.io/projected/359cb11e-11aa-41e5-866c-49247a412d81-kube-api-access-n8k9p\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " 
pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.824257 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-catalog-content\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.824327 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-utilities\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.824731 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-catalog-content\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.824834 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-utilities\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.847196 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k9p\" (UniqueName: \"kubernetes.io/projected/359cb11e-11aa-41e5-866c-49247a412d81-kube-api-access-n8k9p\") pod \"redhat-marketplace-jj4xc\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 
15:47:44 crc kubenswrapper[4729]: I0127 15:47:44.936667 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:45 crc kubenswrapper[4729]: I0127 15:47:45.514272 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj4xc"] Jan 27 15:47:45 crc kubenswrapper[4729]: I0127 15:47:45.732036 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerStarted","Data":"fcef2ceb1a7111e385f3560c248e0b6a2ccb5579839b0fb7dbb3edd44a65c0da"} Jan 27 15:47:46 crc kubenswrapper[4729]: I0127 15:47:46.743418 4729 generic.go:334] "Generic (PLEG): container finished" podID="359cb11e-11aa-41e5-866c-49247a412d81" containerID="25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923" exitCode=0 Jan 27 15:47:46 crc kubenswrapper[4729]: I0127 15:47:46.743598 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerDied","Data":"25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923"} Jan 27 15:47:48 crc kubenswrapper[4729]: I0127 15:47:48.764944 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerStarted","Data":"a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd"} Jan 27 15:47:49 crc kubenswrapper[4729]: I0127 15:47:49.785677 4729 generic.go:334] "Generic (PLEG): container finished" podID="359cb11e-11aa-41e5-866c-49247a412d81" containerID="a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd" exitCode=0 Jan 27 15:47:49 crc kubenswrapper[4729]: I0127 15:47:49.785832 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerDied","Data":"a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd"} Jan 27 15:47:50 crc kubenswrapper[4729]: I0127 15:47:50.802243 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerStarted","Data":"14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9"} Jan 27 15:47:50 crc kubenswrapper[4729]: I0127 15:47:50.824511 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jj4xc" podStartSLOduration=3.369195242 podStartE2EDuration="6.82448806s" podCreationTimestamp="2026-01-27 15:47:44 +0000 UTC" firstStartedPulling="2026-01-27 15:47:46.746128336 +0000 UTC m=+6153.330319340" lastFinishedPulling="2026-01-27 15:47:50.201421154 +0000 UTC m=+6156.785612158" observedRunningTime="2026-01-27 15:47:50.821178412 +0000 UTC m=+6157.405369436" watchObservedRunningTime="2026-01-27 15:47:50.82448806 +0000 UTC m=+6157.408679054" Jan 27 15:47:54 crc kubenswrapper[4729]: I0127 15:47:54.937150 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:54 crc kubenswrapper[4729]: I0127 15:47:54.937577 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:47:55 crc kubenswrapper[4729]: I0127 15:47:55.988844 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jj4xc" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="registry-server" probeResult="failure" output=< Jan 27 15:47:55 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:47:55 crc kubenswrapper[4729]: > Jan 27 15:48:04 crc kubenswrapper[4729]: I0127 
15:48:04.992046 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:48:05 crc kubenswrapper[4729]: I0127 15:48:05.078777 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:48:05 crc kubenswrapper[4729]: I0127 15:48:05.293736 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj4xc"] Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.000964 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jj4xc" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="registry-server" containerID="cri-o://14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9" gracePeriod=2 Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.658294 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.778242 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-utilities\") pod \"359cb11e-11aa-41e5-866c-49247a412d81\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.778360 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-catalog-content\") pod \"359cb11e-11aa-41e5-866c-49247a412d81\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.778426 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8k9p\" (UniqueName: 
\"kubernetes.io/projected/359cb11e-11aa-41e5-866c-49247a412d81-kube-api-access-n8k9p\") pod \"359cb11e-11aa-41e5-866c-49247a412d81\" (UID: \"359cb11e-11aa-41e5-866c-49247a412d81\") " Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.780722 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-utilities" (OuterVolumeSpecName: "utilities") pod "359cb11e-11aa-41e5-866c-49247a412d81" (UID: "359cb11e-11aa-41e5-866c-49247a412d81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.794700 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359cb11e-11aa-41e5-866c-49247a412d81-kube-api-access-n8k9p" (OuterVolumeSpecName: "kube-api-access-n8k9p") pod "359cb11e-11aa-41e5-866c-49247a412d81" (UID: "359cb11e-11aa-41e5-866c-49247a412d81"). InnerVolumeSpecName "kube-api-access-n8k9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.800410 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "359cb11e-11aa-41e5-866c-49247a412d81" (UID: "359cb11e-11aa-41e5-866c-49247a412d81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.882086 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.882130 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8k9p\" (UniqueName: \"kubernetes.io/projected/359cb11e-11aa-41e5-866c-49247a412d81-kube-api-access-n8k9p\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:07 crc kubenswrapper[4729]: I0127 15:48:07.882144 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/359cb11e-11aa-41e5-866c-49247a412d81-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.014934 4729 generic.go:334] "Generic (PLEG): container finished" podID="359cb11e-11aa-41e5-866c-49247a412d81" containerID="14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9" exitCode=0 Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.014985 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerDied","Data":"14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9"} Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.015016 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jj4xc" event={"ID":"359cb11e-11aa-41e5-866c-49247a412d81","Type":"ContainerDied","Data":"fcef2ceb1a7111e385f3560c248e0b6a2ccb5579839b0fb7dbb3edd44a65c0da"} Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.015037 4729 scope.go:117] "RemoveContainer" containerID="14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 
15:48:08.015036 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jj4xc" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.260203 4729 scope.go:117] "RemoveContainer" containerID="a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.339818 4729 scope.go:117] "RemoveContainer" containerID="25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.340802 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj4xc"] Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.348745 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jj4xc"] Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.391599 4729 scope.go:117] "RemoveContainer" containerID="14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9" Jan 27 15:48:08 crc kubenswrapper[4729]: E0127 15:48:08.392469 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9\": container with ID starting with 14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9 not found: ID does not exist" containerID="14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.392511 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9"} err="failed to get container status \"14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9\": rpc error: code = NotFound desc = could not find container \"14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9\": container with ID starting with 
14da336253b47b529b78039eb8772240a8238e4504f003236603cc75514ecbc9 not found: ID does not exist" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.392560 4729 scope.go:117] "RemoveContainer" containerID="a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd" Jan 27 15:48:08 crc kubenswrapper[4729]: E0127 15:48:08.393058 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd\": container with ID starting with a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd not found: ID does not exist" containerID="a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.393093 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd"} err="failed to get container status \"a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd\": rpc error: code = NotFound desc = could not find container \"a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd\": container with ID starting with a70629e6691999479cfc31432555fcbe43dbe29ec1d8c89109375fbb0ded36fd not found: ID does not exist" Jan 27 15:48:08 crc kubenswrapper[4729]: I0127 15:48:08.393111 4729 scope.go:117] "RemoveContainer" containerID="25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923" Jan 27 15:48:08 crc kubenswrapper[4729]: E0127 15:48:08.393439 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923\": container with ID starting with 25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923 not found: ID does not exist" containerID="25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923" Jan 27 15:48:08 crc 
kubenswrapper[4729]: I0127 15:48:08.393507 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923"} err="failed to get container status \"25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923\": rpc error: code = NotFound desc = could not find container \"25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923\": container with ID starting with 25886ff716773ddab2a0c890d0224bce5174c774ab81b95fe901745dfd0ea923 not found: ID does not exist" Jan 27 15:48:10 crc kubenswrapper[4729]: I0127 15:48:10.066166 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359cb11e-11aa-41e5-866c-49247a412d81" path="/var/lib/kubelet/pods/359cb11e-11aa-41e5-866c-49247a412d81/volumes" Jan 27 15:49:52 crc kubenswrapper[4729]: I0127 15:49:52.654975 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:49:52 crc kubenswrapper[4729]: I0127 15:49:52.655591 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.655419 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.656047 4729 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.681261 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8rjj7"] Jan 27 15:50:22 crc kubenswrapper[4729]: E0127 15:50:22.681777 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="extract-content" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.681795 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="extract-content" Jan 27 15:50:22 crc kubenswrapper[4729]: E0127 15:50:22.681812 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="extract-utilities" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.681819 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="extract-utilities" Jan 27 15:50:22 crc kubenswrapper[4729]: E0127 15:50:22.681850 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="registry-server" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.681856 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="registry-server" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.684360 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="359cb11e-11aa-41e5-866c-49247a412d81" containerName="registry-server" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.687035 4729 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.703918 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rjj7"] Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.841822 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krgbh\" (UniqueName: \"kubernetes.io/projected/815d7454-fed5-4dbe-a388-ac9793b0a495-kube-api-access-krgbh\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.841989 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-utilities\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.842053 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-catalog-content\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.944599 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krgbh\" (UniqueName: \"kubernetes.io/projected/815d7454-fed5-4dbe-a388-ac9793b0a495-kube-api-access-krgbh\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.944740 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-utilities\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.944806 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-catalog-content\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.945328 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-catalog-content\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.945369 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-utilities\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:22 crc kubenswrapper[4729]: I0127 15:50:22.970741 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krgbh\" (UniqueName: \"kubernetes.io/projected/815d7454-fed5-4dbe-a388-ac9793b0a495-kube-api-access-krgbh\") pod \"redhat-operators-8rjj7\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:23 crc kubenswrapper[4729]: I0127 15:50:23.009122 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:23 crc kubenswrapper[4729]: I0127 15:50:23.896463 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8rjj7"] Jan 27 15:50:24 crc kubenswrapper[4729]: I0127 15:50:24.513471 4729 generic.go:334] "Generic (PLEG): container finished" podID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerID="2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326" exitCode=0 Jan 27 15:50:24 crc kubenswrapper[4729]: I0127 15:50:24.513578 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerDied","Data":"2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326"} Jan 27 15:50:24 crc kubenswrapper[4729]: I0127 15:50:24.513783 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerStarted","Data":"636577fa9d556c5d450091bcf8bd5aa39aaa79ad8edcd647e29d86ddf6cabc05"} Jan 27 15:50:24 crc kubenswrapper[4729]: I0127 15:50:24.516619 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:50:25 crc kubenswrapper[4729]: I0127 15:50:25.554571 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerStarted","Data":"746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05"} Jan 27 15:50:31 crc kubenswrapper[4729]: I0127 15:50:31.624608 4729 generic.go:334] "Generic (PLEG): container finished" podID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerID="746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05" exitCode=0 Jan 27 15:50:31 crc kubenswrapper[4729]: I0127 15:50:31.624690 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerDied","Data":"746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05"} Jan 27 15:50:32 crc kubenswrapper[4729]: I0127 15:50:32.645563 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerStarted","Data":"d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39"} Jan 27 15:50:32 crc kubenswrapper[4729]: I0127 15:50:32.670369 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8rjj7" podStartSLOduration=3.158412149 podStartE2EDuration="10.670354621s" podCreationTimestamp="2026-01-27 15:50:22 +0000 UTC" firstStartedPulling="2026-01-27 15:50:24.515916513 +0000 UTC m=+6311.100107517" lastFinishedPulling="2026-01-27 15:50:32.027858985 +0000 UTC m=+6318.612049989" observedRunningTime="2026-01-27 15:50:32.668452281 +0000 UTC m=+6319.252643285" watchObservedRunningTime="2026-01-27 15:50:32.670354621 +0000 UTC m=+6319.254545625" Jan 27 15:50:33 crc kubenswrapper[4729]: I0127 15:50:33.009718 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:33 crc kubenswrapper[4729]: I0127 15:50:33.009764 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:50:34 crc kubenswrapper[4729]: I0127 15:50:34.063785 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rjj7" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" probeResult="failure" output=< Jan 27 15:50:34 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:50:34 crc kubenswrapper[4729]: > Jan 27 15:50:44 crc kubenswrapper[4729]: I0127 
15:50:44.070482 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rjj7" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" probeResult="failure" output=< Jan 27 15:50:44 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:50:44 crc kubenswrapper[4729]: > Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.654762 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.655427 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.655479 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.656346 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c5b09e22865c0fa51aa21d83edb7fa1332941e2d3d7ea84bff8ea149a2ff7f6"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.656403 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://6c5b09e22865c0fa51aa21d83edb7fa1332941e2d3d7ea84bff8ea149a2ff7f6" gracePeriod=600 Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.861920 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="6c5b09e22865c0fa51aa21d83edb7fa1332941e2d3d7ea84bff8ea149a2ff7f6" exitCode=0 Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.861975 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"6c5b09e22865c0fa51aa21d83edb7fa1332941e2d3d7ea84bff8ea149a2ff7f6"} Jan 27 15:50:52 crc kubenswrapper[4729]: I0127 15:50:52.862012 4729 scope.go:117] "RemoveContainer" containerID="30cc7b3c40d3b0a4e3304589b3f013c92d5257cdd48e37ecb3b4b428fb740c6e" Jan 27 15:50:53 crc kubenswrapper[4729]: I0127 15:50:53.875338 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771"} Jan 27 15:50:54 crc kubenswrapper[4729]: I0127 15:50:54.064773 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rjj7" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" probeResult="failure" output=< Jan 27 15:50:54 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:50:54 crc kubenswrapper[4729]: > Jan 27 15:51:04 crc kubenswrapper[4729]: I0127 15:51:04.141425 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8rjj7" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" probeResult="failure" output=< Jan 27 
15:51:04 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:51:04 crc kubenswrapper[4729]: > Jan 27 15:51:13 crc kubenswrapper[4729]: I0127 15:51:13.063890 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:51:13 crc kubenswrapper[4729]: I0127 15:51:13.133282 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:51:13 crc kubenswrapper[4729]: I0127 15:51:13.307627 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rjj7"] Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.142290 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8rjj7" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" containerID="cri-o://d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39" gracePeriod=2 Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.921281 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.992286 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-utilities\") pod \"815d7454-fed5-4dbe-a388-ac9793b0a495\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.992734 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krgbh\" (UniqueName: \"kubernetes.io/projected/815d7454-fed5-4dbe-a388-ac9793b0a495-kube-api-access-krgbh\") pod \"815d7454-fed5-4dbe-a388-ac9793b0a495\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.992824 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-catalog-content\") pod \"815d7454-fed5-4dbe-a388-ac9793b0a495\" (UID: \"815d7454-fed5-4dbe-a388-ac9793b0a495\") " Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.993217 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-utilities" (OuterVolumeSpecName: "utilities") pod "815d7454-fed5-4dbe-a388-ac9793b0a495" (UID: "815d7454-fed5-4dbe-a388-ac9793b0a495"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:51:14 crc kubenswrapper[4729]: I0127 15:51:14.993565 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.003147 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815d7454-fed5-4dbe-a388-ac9793b0a495-kube-api-access-krgbh" (OuterVolumeSpecName: "kube-api-access-krgbh") pod "815d7454-fed5-4dbe-a388-ac9793b0a495" (UID: "815d7454-fed5-4dbe-a388-ac9793b0a495"). InnerVolumeSpecName "kube-api-access-krgbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.097849 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krgbh\" (UniqueName: \"kubernetes.io/projected/815d7454-fed5-4dbe-a388-ac9793b0a495-kube-api-access-krgbh\") on node \"crc\" DevicePath \"\"" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.100921 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "815d7454-fed5-4dbe-a388-ac9793b0a495" (UID: "815d7454-fed5-4dbe-a388-ac9793b0a495"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.161106 4729 generic.go:334] "Generic (PLEG): container finished" podID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerID="d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39" exitCode=0 Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.162055 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerDied","Data":"d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39"} Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.162129 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8rjj7" event={"ID":"815d7454-fed5-4dbe-a388-ac9793b0a495","Type":"ContainerDied","Data":"636577fa9d556c5d450091bcf8bd5aa39aaa79ad8edcd647e29d86ddf6cabc05"} Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.162149 4729 scope.go:117] "RemoveContainer" containerID="d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.162059 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8rjj7" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.189371 4729 scope.go:117] "RemoveContainer" containerID="746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.199745 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/815d7454-fed5-4dbe-a388-ac9793b0a495-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.209223 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8rjj7"] Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.224425 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8rjj7"] Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.231766 4729 scope.go:117] "RemoveContainer" containerID="2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.285252 4729 scope.go:117] "RemoveContainer" containerID="d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39" Jan 27 15:51:15 crc kubenswrapper[4729]: E0127 15:51:15.285760 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39\": container with ID starting with d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39 not found: ID does not exist" containerID="d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.285799 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39"} err="failed to get container status 
\"d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39\": rpc error: code = NotFound desc = could not find container \"d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39\": container with ID starting with d6c9d101463f749d506774349f3effa84381e3c17bf0254cbb643cba2a212c39 not found: ID does not exist" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.285823 4729 scope.go:117] "RemoveContainer" containerID="746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05" Jan 27 15:51:15 crc kubenswrapper[4729]: E0127 15:51:15.286427 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05\": container with ID starting with 746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05 not found: ID does not exist" containerID="746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.286480 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05"} err="failed to get container status \"746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05\": rpc error: code = NotFound desc = could not find container \"746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05\": container with ID starting with 746b60117fc29ec6a34592eaa917b74a4038a5b6b523a035c82b4324e1651a05 not found: ID does not exist" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.286516 4729 scope.go:117] "RemoveContainer" containerID="2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326" Jan 27 15:51:15 crc kubenswrapper[4729]: E0127 15:51:15.286936 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326\": container with ID starting with 2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326 not found: ID does not exist" containerID="2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326" Jan 27 15:51:15 crc kubenswrapper[4729]: I0127 15:51:15.286971 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326"} err="failed to get container status \"2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326\": rpc error: code = NotFound desc = could not find container \"2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326\": container with ID starting with 2ab9d07b1f3d17b22b9131a66a38eba3e921b05086ba04aec372d76b6474c326 not found: ID does not exist" Jan 27 15:51:16 crc kubenswrapper[4729]: I0127 15:51:16.068553 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" path="/var/lib/kubelet/pods/815d7454-fed5-4dbe-a388-ac9793b0a495/volumes" Jan 27 15:53:22 crc kubenswrapper[4729]: I0127 15:53:22.654839 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:53:22 crc kubenswrapper[4729]: I0127 15:53:22.655456 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:52 crc kubenswrapper[4729]: I0127 15:53:52.654986 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:53:52 crc kubenswrapper[4729]: I0127 15:53:52.655526 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.549688 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jkvct"] Jan 27 15:53:53 crc kubenswrapper[4729]: E0127 15:53:53.550604 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="extract-content" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.550624 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="extract-content" Jan 27 15:53:53 crc kubenswrapper[4729]: E0127 15:53:53.550674 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.550681 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" Jan 27 15:53:53 crc kubenswrapper[4729]: E0127 15:53:53.550697 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="extract-utilities" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.550703 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="extract-utilities" Jan 27 15:53:53 crc 
kubenswrapper[4729]: I0127 15:53:53.550932 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="815d7454-fed5-4dbe-a388-ac9793b0a495" containerName="registry-server" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.553246 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.567985 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkvct"] Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.671846 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bd5\" (UniqueName: \"kubernetes.io/projected/cd8924fb-822e-4b94-b7be-5d1bec5ded33-kube-api-access-94bd5\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.672230 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-utilities\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.672468 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-catalog-content\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.774519 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-catalog-content\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.774714 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94bd5\" (UniqueName: \"kubernetes.io/projected/cd8924fb-822e-4b94-b7be-5d1bec5ded33-kube-api-access-94bd5\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.774950 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-utilities\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.775397 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-catalog-content\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.775449 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-utilities\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.797362 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94bd5\" (UniqueName: 
\"kubernetes.io/projected/cd8924fb-822e-4b94-b7be-5d1bec5ded33-kube-api-access-94bd5\") pod \"certified-operators-jkvct\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:53 crc kubenswrapper[4729]: I0127 15:53:53.874948 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:53:54 crc kubenswrapper[4729]: I0127 15:53:54.419054 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jkvct"] Jan 27 15:53:54 crc kubenswrapper[4729]: W0127 15:53:54.427773 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8924fb_822e_4b94_b7be_5d1bec5ded33.slice/crio-ddeef34a04dcc3b43e1724af77a5e81ed8d79645fe7ebd9a110b6b5227d8fb2e WatchSource:0}: Error finding container ddeef34a04dcc3b43e1724af77a5e81ed8d79645fe7ebd9a110b6b5227d8fb2e: Status 404 returned error can't find the container with id ddeef34a04dcc3b43e1724af77a5e81ed8d79645fe7ebd9a110b6b5227d8fb2e Jan 27 15:53:54 crc kubenswrapper[4729]: I0127 15:53:54.947262 4729 generic.go:334] "Generic (PLEG): container finished" podID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerID="39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f" exitCode=0 Jan 27 15:53:54 crc kubenswrapper[4729]: I0127 15:53:54.947362 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkvct" event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerDied","Data":"39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f"} Jan 27 15:53:54 crc kubenswrapper[4729]: I0127 15:53:54.947584 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkvct" 
event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerStarted","Data":"ddeef34a04dcc3b43e1724af77a5e81ed8d79645fe7ebd9a110b6b5227d8fb2e"} Jan 27 15:53:55 crc kubenswrapper[4729]: I0127 15:53:55.965544 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkvct" event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerStarted","Data":"e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300"} Jan 27 15:53:57 crc kubenswrapper[4729]: I0127 15:53:57.989315 4729 generic.go:334] "Generic (PLEG): container finished" podID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerID="e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300" exitCode=0 Jan 27 15:53:57 crc kubenswrapper[4729]: I0127 15:53:57.989418 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkvct" event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerDied","Data":"e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300"} Jan 27 15:53:59 crc kubenswrapper[4729]: I0127 15:53:59.003600 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkvct" event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerStarted","Data":"9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e"} Jan 27 15:53:59 crc kubenswrapper[4729]: I0127 15:53:59.034652 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jkvct" podStartSLOduration=2.511601437 podStartE2EDuration="6.03463156s" podCreationTimestamp="2026-01-27 15:53:53 +0000 UTC" firstStartedPulling="2026-01-27 15:53:54.953396005 +0000 UTC m=+6521.537587009" lastFinishedPulling="2026-01-27 15:53:58.476426128 +0000 UTC m=+6525.060617132" observedRunningTime="2026-01-27 15:53:59.030368687 +0000 UTC m=+6525.614559711" watchObservedRunningTime="2026-01-27 15:53:59.03463156 +0000 UTC 
m=+6525.618822564" Jan 27 15:54:03 crc kubenswrapper[4729]: I0127 15:54:03.876654 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:54:03 crc kubenswrapper[4729]: I0127 15:54:03.877753 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:54:03 crc kubenswrapper[4729]: I0127 15:54:03.927600 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:54:04 crc kubenswrapper[4729]: I0127 15:54:04.130440 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:54:04 crc kubenswrapper[4729]: I0127 15:54:04.183171 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkvct"] Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.095310 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jkvct" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="registry-server" containerID="cri-o://9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e" gracePeriod=2 Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.667789 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.839107 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-catalog-content\") pod \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.839331 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94bd5\" (UniqueName: \"kubernetes.io/projected/cd8924fb-822e-4b94-b7be-5d1bec5ded33-kube-api-access-94bd5\") pod \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.839387 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-utilities\") pod \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\" (UID: \"cd8924fb-822e-4b94-b7be-5d1bec5ded33\") " Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.841334 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-utilities" (OuterVolumeSpecName: "utilities") pod "cd8924fb-822e-4b94-b7be-5d1bec5ded33" (UID: "cd8924fb-822e-4b94-b7be-5d1bec5ded33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.864119 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8924fb-822e-4b94-b7be-5d1bec5ded33-kube-api-access-94bd5" (OuterVolumeSpecName: "kube-api-access-94bd5") pod "cd8924fb-822e-4b94-b7be-5d1bec5ded33" (UID: "cd8924fb-822e-4b94-b7be-5d1bec5ded33"). InnerVolumeSpecName "kube-api-access-94bd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.928772 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8924fb-822e-4b94-b7be-5d1bec5ded33" (UID: "cd8924fb-822e-4b94-b7be-5d1bec5ded33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.948105 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94bd5\" (UniqueName: \"kubernetes.io/projected/cd8924fb-822e-4b94-b7be-5d1bec5ded33-kube-api-access-94bd5\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.948151 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:06 crc kubenswrapper[4729]: I0127 15:54:06.948163 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8924fb-822e-4b94-b7be-5d1bec5ded33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.108795 4729 generic.go:334] "Generic (PLEG): container finished" podID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerID="9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e" exitCode=0 Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.108981 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jkvct" event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerDied","Data":"9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e"} Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.109777 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jkvct" event={"ID":"cd8924fb-822e-4b94-b7be-5d1bec5ded33","Type":"ContainerDied","Data":"ddeef34a04dcc3b43e1724af77a5e81ed8d79645fe7ebd9a110b6b5227d8fb2e"} Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.109088 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jkvct" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.109906 4729 scope.go:117] "RemoveContainer" containerID="9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.147814 4729 scope.go:117] "RemoveContainer" containerID="e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.152026 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jkvct"] Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.174484 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jkvct"] Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.178316 4729 scope.go:117] "RemoveContainer" containerID="39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.224200 4729 scope.go:117] "RemoveContainer" containerID="9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e" Jan 27 15:54:07 crc kubenswrapper[4729]: E0127 15:54:07.224706 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e\": container with ID starting with 9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e not found: ID does not exist" containerID="9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 
15:54:07.224850 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e"} err="failed to get container status \"9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e\": rpc error: code = NotFound desc = could not find container \"9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e\": container with ID starting with 9401600b3aafc1f6115c47aa1e90ecf5c007897a47972ddc8dccd6a4ae6f1a6e not found: ID does not exist" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.225059 4729 scope.go:117] "RemoveContainer" containerID="e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300" Jan 27 15:54:07 crc kubenswrapper[4729]: E0127 15:54:07.225408 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300\": container with ID starting with e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300 not found: ID does not exist" containerID="e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.225492 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300"} err="failed to get container status \"e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300\": rpc error: code = NotFound desc = could not find container \"e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300\": container with ID starting with e6a8324cacc34a5f1a331abdd4386128baa3e2317a3e42321a576e6b5a2b1300 not found: ID does not exist" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.225596 4729 scope.go:117] "RemoveContainer" containerID="39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f" Jan 27 15:54:07 crc 
kubenswrapper[4729]: E0127 15:54:07.225868 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f\": container with ID starting with 39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f not found: ID does not exist" containerID="39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f" Jan 27 15:54:07 crc kubenswrapper[4729]: I0127 15:54:07.226014 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f"} err="failed to get container status \"39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f\": rpc error: code = NotFound desc = could not find container \"39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f\": container with ID starting with 39060e5f486e3e9ce3dd9268f81f3a99acca85f6f7bd9d99493df8dfb7067d8f not found: ID does not exist" Jan 27 15:54:08 crc kubenswrapper[4729]: I0127 15:54:08.065094 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" path="/var/lib/kubelet/pods/cd8924fb-822e-4b94-b7be-5d1bec5ded33/volumes" Jan 27 15:54:22 crc kubenswrapper[4729]: I0127 15:54:22.655383 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 15:54:22 crc kubenswrapper[4729]: I0127 15:54:22.656059 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 27 15:54:22 crc kubenswrapper[4729]: I0127 15:54:22.656111 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 15:54:22 crc kubenswrapper[4729]: I0127 15:54:22.657016 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 15:54:22 crc kubenswrapper[4729]: I0127 15:54:22.657070 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" gracePeriod=600 Jan 27 15:54:22 crc kubenswrapper[4729]: E0127 15:54:22.882850 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:54:23 crc kubenswrapper[4729]: I0127 15:54:23.288249 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" exitCode=0 Jan 27 15:54:23 crc kubenswrapper[4729]: I0127 15:54:23.288308 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771"} Jan 27 15:54:23 crc kubenswrapper[4729]: I0127 15:54:23.288375 4729 scope.go:117] "RemoveContainer" containerID="6c5b09e22865c0fa51aa21d83edb7fa1332941e2d3d7ea84bff8ea149a2ff7f6" Jan 27 15:54:23 crc kubenswrapper[4729]: I0127 15:54:23.289155 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:54:23 crc kubenswrapper[4729]: E0127 15:54:23.289477 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:54:35 crc kubenswrapper[4729]: I0127 15:54:35.052017 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:54:35 crc kubenswrapper[4729]: E0127 15:54:35.053213 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:54:47 crc kubenswrapper[4729]: I0127 15:54:47.051701 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:54:47 crc kubenswrapper[4729]: E0127 15:54:47.052491 4729 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:55:01 crc kubenswrapper[4729]: I0127 15:55:01.051987 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:55:01 crc kubenswrapper[4729]: E0127 15:55:01.052975 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:55:14 crc kubenswrapper[4729]: I0127 15:55:14.064233 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:55:14 crc kubenswrapper[4729]: E0127 15:55:14.065254 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:55:27 crc kubenswrapper[4729]: I0127 15:55:27.051890 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:55:27 crc kubenswrapper[4729]: E0127 
15:55:27.052725 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:55:40 crc kubenswrapper[4729]: I0127 15:55:40.051975 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:55:40 crc kubenswrapper[4729]: E0127 15:55:40.052921 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:55:53 crc kubenswrapper[4729]: I0127 15:55:53.051857 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:55:53 crc kubenswrapper[4729]: E0127 15:55:53.053026 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:56:04 crc kubenswrapper[4729]: I0127 15:56:04.063747 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:56:04 crc 
kubenswrapper[4729]: E0127 15:56:04.064814 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:56:18 crc kubenswrapper[4729]: I0127 15:56:18.051128 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:56:18 crc kubenswrapper[4729]: E0127 15:56:18.052011 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:56:29 crc kubenswrapper[4729]: I0127 15:56:29.051607 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:56:29 crc kubenswrapper[4729]: E0127 15:56:29.052622 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:56:41 crc kubenswrapper[4729]: I0127 15:56:41.052186 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 
27 15:56:41 crc kubenswrapper[4729]: E0127 15:56:41.053543 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:56:55 crc kubenswrapper[4729]: I0127 15:56:55.051057 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:56:55 crc kubenswrapper[4729]: E0127 15:56:55.051750 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:57:07 crc kubenswrapper[4729]: I0127 15:57:07.139474 4729 generic.go:334] "Generic (PLEG): container finished" podID="440fdd61-ad16-4ee7-bf64-2754db1c5db8" containerID="a51181e33ab61a5c6f36531d88077f30d4690d52faf4a207447026dec8d841a6" exitCode=0 Jan 27 15:57:07 crc kubenswrapper[4729]: I0127 15:57:07.139870 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"440fdd61-ad16-4ee7-bf64-2754db1c5db8","Type":"ContainerDied","Data":"a51181e33ab61a5c6f36531d88077f30d4690d52faf4a207447026dec8d841a6"} Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.801434 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.971372 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-workdir\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.971472 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config-secret\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972096 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ssh-key\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972191 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ca-certs\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972269 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn9kk\" (UniqueName: \"kubernetes.io/projected/440fdd61-ad16-4ee7-bf64-2754db1c5db8-kube-api-access-tn9kk\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972385 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972545 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972595 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-temporary\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.972641 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-config-data\") pod \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\" (UID: \"440fdd61-ad16-4ee7-bf64-2754db1c5db8\") " Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.973193 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.973790 4729 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.974207 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-config-data" (OuterVolumeSpecName: "config-data") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.993088 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.997795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440fdd61-ad16-4ee7-bf64-2754db1c5db8-kube-api-access-tn9kk" (OuterVolumeSpecName: "kube-api-access-tn9kk") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "kube-api-access-tn9kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:08 crc kubenswrapper[4729]: I0127 15:57:08.998174 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.012175 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.016580 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.033696 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.052249 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:57:09 crc kubenswrapper[4729]: E0127 15:57:09.053157 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.060369 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "440fdd61-ad16-4ee7-bf64-2754db1c5db8" (UID: "440fdd61-ad16-4ee7-bf64-2754db1c5db8"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077575 4729 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077646 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077689 4729 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/440fdd61-ad16-4ee7-bf64-2754db1c5db8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077708 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077739 4729 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077783 4729 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/440fdd61-ad16-4ee7-bf64-2754db1c5db8-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.077799 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn9kk\" (UniqueName: \"kubernetes.io/projected/440fdd61-ad16-4ee7-bf64-2754db1c5db8-kube-api-access-tn9kk\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 
15:57:09.077811 4729 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/440fdd61-ad16-4ee7-bf64-2754db1c5db8-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.103791 4729 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.179224 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"440fdd61-ad16-4ee7-bf64-2754db1c5db8","Type":"ContainerDied","Data":"2c8e3bb7cc7912b4c4ac9b519a7a7d38a35fc081b0e9a01660dc302053d6d25a"} Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.179269 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8e3bb7cc7912b4c4ac9b519a7a7d38a35fc081b0e9a01660dc302053d6d25a" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.179353 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 27 15:57:09 crc kubenswrapper[4729]: I0127 15:57:09.180207 4729 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.820045 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 15:57:12 crc kubenswrapper[4729]: E0127 15:57:12.821298 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="registry-server" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.821336 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="registry-server" Jan 27 15:57:12 crc kubenswrapper[4729]: E0127 15:57:12.821360 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="extract-utilities" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.821367 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="extract-utilities" Jan 27 15:57:12 crc kubenswrapper[4729]: E0127 15:57:12.821380 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="extract-content" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.821388 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="extract-content" Jan 27 15:57:12 crc kubenswrapper[4729]: E0127 15:57:12.821408 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440fdd61-ad16-4ee7-bf64-2754db1c5db8" containerName="tempest-tests-tempest-tests-runner" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.821416 4729 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="440fdd61-ad16-4ee7-bf64-2754db1c5db8" containerName="tempest-tests-tempest-tests-runner" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.821701 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="440fdd61-ad16-4ee7-bf64-2754db1c5db8" containerName="tempest-tests-tempest-tests-runner" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.821732 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8924fb-822e-4b94-b7be-5d1bec5ded33" containerName="registry-server" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.822690 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.835678 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.837787 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nbsgt" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.977719 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:12 crc kubenswrapper[4729]: I0127 15:57:12.978166 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zbv\" (UniqueName: \"kubernetes.io/projected/d639ea1e-42fa-4467-9d0c-1f66c65c108f-kube-api-access-h4zbv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.081069 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.081210 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zbv\" (UniqueName: \"kubernetes.io/projected/d639ea1e-42fa-4467-9d0c-1f66c65c108f-kube-api-access-h4zbv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.083577 4729 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.108699 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zbv\" (UniqueName: \"kubernetes.io/projected/d639ea1e-42fa-4467-9d0c-1f66c65c108f-kube-api-access-h4zbv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.118461 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"d639ea1e-42fa-4467-9d0c-1f66c65c108f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.153503 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.650124 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 27 15:57:13 crc kubenswrapper[4729]: I0127 15:57:13.654827 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 15:57:14 crc kubenswrapper[4729]: I0127 15:57:14.230278 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d639ea1e-42fa-4467-9d0c-1f66c65c108f","Type":"ContainerStarted","Data":"b710b35e3e518a2eadc704ea3a2749542347c9735950b1ef48cd90b2e2ea1b83"} Jan 27 15:57:15 crc kubenswrapper[4729]: I0127 15:57:15.246861 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"d639ea1e-42fa-4467-9d0c-1f66c65c108f","Type":"ContainerStarted","Data":"1d59cf81922694e8b15eb3fe2de26c0c02682765312879de2c582809360a1c40"} Jan 27 15:57:15 crc kubenswrapper[4729]: I0127 15:57:15.262863 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.159400578 podStartE2EDuration="3.262838049s" podCreationTimestamp="2026-01-27 15:57:12 +0000 UTC" firstStartedPulling="2026-01-27 15:57:13.654576488 +0000 UTC m=+6720.238767492" lastFinishedPulling="2026-01-27 15:57:14.758013959 +0000 UTC m=+6721.342204963" observedRunningTime="2026-01-27 
15:57:15.262684485 +0000 UTC m=+6721.846875499" watchObservedRunningTime="2026-01-27 15:57:15.262838049 +0000 UTC m=+6721.847029113" Jan 27 15:57:24 crc kubenswrapper[4729]: I0127 15:57:24.064435 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:57:24 crc kubenswrapper[4729]: E0127 15:57:24.065167 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:57:27 crc kubenswrapper[4729]: I0127 15:57:27.980414 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lz4zv"] Jan 27 15:57:27 crc kubenswrapper[4729]: I0127 15:57:27.984014 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.027788 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lz4zv"] Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.087298 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxb6n\" (UniqueName: \"kubernetes.io/projected/66a40259-a853-4a93-a451-58ea1283cafc-kube-api-access-wxb6n\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.087666 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-utilities\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.088038 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-catalog-content\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.190747 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-catalog-content\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.191188 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wxb6n\" (UniqueName: \"kubernetes.io/projected/66a40259-a853-4a93-a451-58ea1283cafc-kube-api-access-wxb6n\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.191368 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-utilities\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.191475 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-catalog-content\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.192193 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-utilities\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.222729 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxb6n\" (UniqueName: \"kubernetes.io/projected/66a40259-a853-4a93-a451-58ea1283cafc-kube-api-access-wxb6n\") pod \"community-operators-lz4zv\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.315064 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:28 crc kubenswrapper[4729]: I0127 15:57:28.830343 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lz4zv"] Jan 27 15:57:29 crc kubenswrapper[4729]: I0127 15:57:29.411923 4729 generic.go:334] "Generic (PLEG): container finished" podID="66a40259-a853-4a93-a451-58ea1283cafc" containerID="6732a0269972e737ee37cd171662604ec32fa3fd7f76c28d77bda78ed5e0839f" exitCode=0 Jan 27 15:57:29 crc kubenswrapper[4729]: I0127 15:57:29.412101 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerDied","Data":"6732a0269972e737ee37cd171662604ec32fa3fd7f76c28d77bda78ed5e0839f"} Jan 27 15:57:29 crc kubenswrapper[4729]: I0127 15:57:29.412223 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerStarted","Data":"215e8b5b6ad60234f44571f5c115ee88fba5e5fbd3c13906049e8a1e0e647984"} Jan 27 15:57:30 crc kubenswrapper[4729]: I0127 15:57:30.424735 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerStarted","Data":"1ea5baa17ddb2fe47f5ca20ce47448c513b93884993279d0dfccda31ce58e8a6"} Jan 27 15:57:33 crc kubenswrapper[4729]: I0127 15:57:33.458555 4729 generic.go:334] "Generic (PLEG): container finished" podID="66a40259-a853-4a93-a451-58ea1283cafc" containerID="1ea5baa17ddb2fe47f5ca20ce47448c513b93884993279d0dfccda31ce58e8a6" exitCode=0 Jan 27 15:57:33 crc kubenswrapper[4729]: I0127 15:57:33.458609 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" 
event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerDied","Data":"1ea5baa17ddb2fe47f5ca20ce47448c513b93884993279d0dfccda31ce58e8a6"} Jan 27 15:57:34 crc kubenswrapper[4729]: I0127 15:57:34.482745 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerStarted","Data":"5fb62e26e9ac576533f2a575461628b6d547a067abeb7d907a1bf8684370b710"} Jan 27 15:57:35 crc kubenswrapper[4729]: I0127 15:57:35.051270 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:57:35 crc kubenswrapper[4729]: E0127 15:57:35.051957 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:57:38 crc kubenswrapper[4729]: I0127 15:57:38.315604 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:38 crc kubenswrapper[4729]: I0127 15:57:38.316175 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:39 crc kubenswrapper[4729]: I0127 15:57:39.368820 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lz4zv" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="registry-server" probeResult="failure" output=< Jan 27 15:57:39 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 15:57:39 crc kubenswrapper[4729]: > Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 
15:57:45.609814 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lz4zv" podStartSLOduration=13.964745287 podStartE2EDuration="18.609785682s" podCreationTimestamp="2026-01-27 15:57:27 +0000 UTC" firstStartedPulling="2026-01-27 15:57:29.416073836 +0000 UTC m=+6736.000264840" lastFinishedPulling="2026-01-27 15:57:34.061114231 +0000 UTC m=+6740.645305235" observedRunningTime="2026-01-27 15:57:34.506187749 +0000 UTC m=+6741.090378773" watchObservedRunningTime="2026-01-27 15:57:45.609785682 +0000 UTC m=+6752.193976706" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.614279 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7js5l/must-gather-df56w"] Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.617234 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.621387 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-7js5l"/"default-dockercfg-zxlg7" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.621495 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7js5l"/"openshift-service-ca.crt" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.621581 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7js5l"/"kube-root-ca.crt" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.743188 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7js5l/must-gather-df56w"] Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.746748 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55bf\" (UniqueName: \"kubernetes.io/projected/a4c065cc-3ae3-4d70-ba51-4888709048c9-kube-api-access-j55bf\") pod \"must-gather-df56w\" (UID: 
\"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.747013 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4c065cc-3ae3-4d70-ba51-4888709048c9-must-gather-output\") pod \"must-gather-df56w\" (UID: \"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.850864 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55bf\" (UniqueName: \"kubernetes.io/projected/a4c065cc-3ae3-4d70-ba51-4888709048c9-kube-api-access-j55bf\") pod \"must-gather-df56w\" (UID: \"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.851025 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4c065cc-3ae3-4d70-ba51-4888709048c9-must-gather-output\") pod \"must-gather-df56w\" (UID: \"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.860435 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4c065cc-3ae3-4d70-ba51-4888709048c9-must-gather-output\") pod \"must-gather-df56w\" (UID: \"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.914358 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55bf\" (UniqueName: \"kubernetes.io/projected/a4c065cc-3ae3-4d70-ba51-4888709048c9-kube-api-access-j55bf\") pod \"must-gather-df56w\" (UID: 
\"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:45 crc kubenswrapper[4729]: I0127 15:57:45.944993 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 15:57:46 crc kubenswrapper[4729]: I0127 15:57:46.052360 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:57:46 crc kubenswrapper[4729]: E0127 15:57:46.052648 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:57:46 crc kubenswrapper[4729]: I0127 15:57:46.556654 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7js5l/must-gather-df56w"] Jan 27 15:57:46 crc kubenswrapper[4729]: I0127 15:57:46.614492 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/must-gather-df56w" event={"ID":"a4c065cc-3ae3-4d70-ba51-4888709048c9","Type":"ContainerStarted","Data":"aa87c3edfc0948daaad7ea777bd1b06054f804db4bc46f02548bd39b03745f17"} Jan 27 15:57:48 crc kubenswrapper[4729]: I0127 15:57:48.381396 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:48 crc kubenswrapper[4729]: I0127 15:57:48.449632 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:48 crc kubenswrapper[4729]: I0127 15:57:48.620629 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lz4zv"] Jan 27 15:57:49 crc kubenswrapper[4729]: I0127 15:57:49.663613 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lz4zv" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="registry-server" containerID="cri-o://5fb62e26e9ac576533f2a575461628b6d547a067abeb7d907a1bf8684370b710" gracePeriod=2 Jan 27 15:57:50 crc kubenswrapper[4729]: I0127 15:57:50.676007 4729 generic.go:334] "Generic (PLEG): container finished" podID="66a40259-a853-4a93-a451-58ea1283cafc" containerID="5fb62e26e9ac576533f2a575461628b6d547a067abeb7d907a1bf8684370b710" exitCode=0 Jan 27 15:57:50 crc kubenswrapper[4729]: I0127 15:57:50.676103 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerDied","Data":"5fb62e26e9ac576533f2a575461628b6d547a067abeb7d907a1bf8684370b710"} Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.088984 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.189481 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxb6n\" (UniqueName: \"kubernetes.io/projected/66a40259-a853-4a93-a451-58ea1283cafc-kube-api-access-wxb6n\") pod \"66a40259-a853-4a93-a451-58ea1283cafc\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.189594 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-utilities\") pod \"66a40259-a853-4a93-a451-58ea1283cafc\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.189906 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-catalog-content\") pod \"66a40259-a853-4a93-a451-58ea1283cafc\" (UID: \"66a40259-a853-4a93-a451-58ea1283cafc\") " Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.191954 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-utilities" (OuterVolumeSpecName: "utilities") pod "66a40259-a853-4a93-a451-58ea1283cafc" (UID: "66a40259-a853-4a93-a451-58ea1283cafc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.200207 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a40259-a853-4a93-a451-58ea1283cafc-kube-api-access-wxb6n" (OuterVolumeSpecName: "kube-api-access-wxb6n") pod "66a40259-a853-4a93-a451-58ea1283cafc" (UID: "66a40259-a853-4a93-a451-58ea1283cafc"). InnerVolumeSpecName "kube-api-access-wxb6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.252321 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66a40259-a853-4a93-a451-58ea1283cafc" (UID: "66a40259-a853-4a93-a451-58ea1283cafc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.294844 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxb6n\" (UniqueName: \"kubernetes.io/projected/66a40259-a853-4a93-a451-58ea1283cafc-kube-api-access-wxb6n\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.295277 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.295410 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a40259-a853-4a93-a451-58ea1283cafc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.726253 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lz4zv" event={"ID":"66a40259-a853-4a93-a451-58ea1283cafc","Type":"ContainerDied","Data":"215e8b5b6ad60234f44571f5c115ee88fba5e5fbd3c13906049e8a1e0e647984"} Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.726544 4729 scope.go:117] "RemoveContainer" containerID="5fb62e26e9ac576533f2a575461628b6d547a067abeb7d907a1bf8684370b710" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.726278 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lz4zv" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.729152 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/must-gather-df56w" event={"ID":"a4c065cc-3ae3-4d70-ba51-4888709048c9","Type":"ContainerStarted","Data":"1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4"} Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.729224 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/must-gather-df56w" event={"ID":"a4c065cc-3ae3-4d70-ba51-4888709048c9","Type":"ContainerStarted","Data":"37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719"} Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.743985 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7js5l/must-gather-df56w" podStartSLOduration=2.637176654 podStartE2EDuration="9.743964486s" podCreationTimestamp="2026-01-27 15:57:45 +0000 UTC" firstStartedPulling="2026-01-27 15:57:46.568410261 +0000 UTC m=+6753.152601265" lastFinishedPulling="2026-01-27 15:57:53.675198093 +0000 UTC m=+6760.259389097" observedRunningTime="2026-01-27 15:57:54.743242047 +0000 UTC m=+6761.327433051" watchObservedRunningTime="2026-01-27 15:57:54.743964486 +0000 UTC m=+6761.328155510" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.766148 4729 scope.go:117] "RemoveContainer" containerID="1ea5baa17ddb2fe47f5ca20ce47448c513b93884993279d0dfccda31ce58e8a6" Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.770802 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lz4zv"] Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.781868 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lz4zv"] Jan 27 15:57:54 crc kubenswrapper[4729]: I0127 15:57:54.796403 4729 scope.go:117] "RemoveContainer" 
containerID="6732a0269972e737ee37cd171662604ec32fa3fd7f76c28d77bda78ed5e0839f" Jan 27 15:57:56 crc kubenswrapper[4729]: I0127 15:57:56.064322 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a40259-a853-4a93-a451-58ea1283cafc" path="/var/lib/kubelet/pods/66a40259-a853-4a93-a451-58ea1283cafc/volumes" Jan 27 15:57:59 crc kubenswrapper[4729]: I0127 15:57:59.051881 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:57:59 crc kubenswrapper[4729]: E0127 15:57:59.052458 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.357326 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7js5l/crc-debug-pkflk"] Jan 27 15:58:00 crc kubenswrapper[4729]: E0127 15:58:00.358288 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="extract-utilities" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.358307 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="extract-utilities" Jan 27 15:58:00 crc kubenswrapper[4729]: E0127 15:58:00.358332 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="extract-content" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.358340 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="extract-content" Jan 27 15:58:00 crc 
kubenswrapper[4729]: E0127 15:58:00.358387 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="registry-server" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.358397 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="registry-server" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.358706 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a40259-a853-4a93-a451-58ea1283cafc" containerName="registry-server" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.360428 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.542884 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a67642-fad0-484b-923c-f95379380536-host\") pod \"crc-debug-pkflk\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.543015 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29qg\" (UniqueName: \"kubernetes.io/projected/01a67642-fad0-484b-923c-f95379380536-kube-api-access-d29qg\") pod \"crc-debug-pkflk\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.645296 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a67642-fad0-484b-923c-f95379380536-host\") pod \"crc-debug-pkflk\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.645376 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29qg\" (UniqueName: \"kubernetes.io/projected/01a67642-fad0-484b-923c-f95379380536-kube-api-access-d29qg\") pod \"crc-debug-pkflk\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.646369 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a67642-fad0-484b-923c-f95379380536-host\") pod \"crc-debug-pkflk\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.672497 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29qg\" (UniqueName: \"kubernetes.io/projected/01a67642-fad0-484b-923c-f95379380536-kube-api-access-d29qg\") pod \"crc-debug-pkflk\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.681312 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:58:00 crc kubenswrapper[4729]: W0127 15:58:00.731045 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a67642_fad0_484b_923c_f95379380536.slice/crio-d58e9e7f20b7333376c2dd9950a8f25a23f0410f2bea6334ea836db178b5ba32 WatchSource:0}: Error finding container d58e9e7f20b7333376c2dd9950a8f25a23f0410f2bea6334ea836db178b5ba32: Status 404 returned error can't find the container with id d58e9e7f20b7333376c2dd9950a8f25a23f0410f2bea6334ea836db178b5ba32 Jan 27 15:58:00 crc kubenswrapper[4729]: I0127 15:58:00.817484 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-pkflk" event={"ID":"01a67642-fad0-484b-923c-f95379380536","Type":"ContainerStarted","Data":"d58e9e7f20b7333376c2dd9950a8f25a23f0410f2bea6334ea836db178b5ba32"} Jan 27 15:58:11 crc kubenswrapper[4729]: I0127 15:58:11.051557 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:58:11 crc kubenswrapper[4729]: E0127 15:58:11.052308 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:58:15 crc kubenswrapper[4729]: I0127 15:58:15.006761 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-pkflk" event={"ID":"01a67642-fad0-484b-923c-f95379380536","Type":"ContainerStarted","Data":"2e606a8947a9502e548cc166e6cd660647da6b179be43b592ab4fdda32a7c0d8"} Jan 27 15:58:15 crc kubenswrapper[4729]: I0127 15:58:15.031334 4729 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7js5l/crc-debug-pkflk" podStartSLOduration=1.850520021 podStartE2EDuration="15.031314144s" podCreationTimestamp="2026-01-27 15:58:00 +0000 UTC" firstStartedPulling="2026-01-27 15:58:00.735805592 +0000 UTC m=+6767.319996596" lastFinishedPulling="2026-01-27 15:58:13.916599715 +0000 UTC m=+6780.500790719" observedRunningTime="2026-01-27 15:58:15.019905143 +0000 UTC m=+6781.604096147" watchObservedRunningTime="2026-01-27 15:58:15.031314144 +0000 UTC m=+6781.615505148" Jan 27 15:58:22 crc kubenswrapper[4729]: I0127 15:58:22.055292 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:58:22 crc kubenswrapper[4729]: E0127 15:58:22.067033 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:58:33 crc kubenswrapper[4729]: I0127 15:58:33.051606 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:58:33 crc kubenswrapper[4729]: E0127 15:58:33.054546 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:58:47 crc kubenswrapper[4729]: I0127 15:58:47.051422 4729 scope.go:117] 
"RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:58:47 crc kubenswrapper[4729]: E0127 15:58:47.052203 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:59:02 crc kubenswrapper[4729]: I0127 15:59:02.051334 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:59:02 crc kubenswrapper[4729]: E0127 15:59:02.052307 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:59:11 crc kubenswrapper[4729]: I0127 15:59:11.672170 4729 generic.go:334] "Generic (PLEG): container finished" podID="01a67642-fad0-484b-923c-f95379380536" containerID="2e606a8947a9502e548cc166e6cd660647da6b179be43b592ab4fdda32a7c0d8" exitCode=0 Jan 27 15:59:11 crc kubenswrapper[4729]: I0127 15:59:11.672278 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-pkflk" event={"ID":"01a67642-fad0-484b-923c-f95379380536","Type":"ContainerDied","Data":"2e606a8947a9502e548cc166e6cd660647da6b179be43b592ab4fdda32a7c0d8"} Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.850291 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.893682 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7js5l/crc-debug-pkflk"] Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.905280 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7js5l/crc-debug-pkflk"] Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.917822 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a67642-fad0-484b-923c-f95379380536-host\") pod \"01a67642-fad0-484b-923c-f95379380536\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.917932 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d29qg\" (UniqueName: \"kubernetes.io/projected/01a67642-fad0-484b-923c-f95379380536-kube-api-access-d29qg\") pod \"01a67642-fad0-484b-923c-f95379380536\" (UID: \"01a67642-fad0-484b-923c-f95379380536\") " Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.918027 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01a67642-fad0-484b-923c-f95379380536-host" (OuterVolumeSpecName: "host") pod "01a67642-fad0-484b-923c-f95379380536" (UID: "01a67642-fad0-484b-923c-f95379380536"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.918731 4729 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/01a67642-fad0-484b-923c-f95379380536-host\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:12 crc kubenswrapper[4729]: I0127 15:59:12.935547 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a67642-fad0-484b-923c-f95379380536-kube-api-access-d29qg" (OuterVolumeSpecName: "kube-api-access-d29qg") pod "01a67642-fad0-484b-923c-f95379380536" (UID: "01a67642-fad0-484b-923c-f95379380536"). InnerVolumeSpecName "kube-api-access-d29qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:59:13 crc kubenswrapper[4729]: I0127 15:59:13.020743 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d29qg\" (UniqueName: \"kubernetes.io/projected/01a67642-fad0-484b-923c-f95379380536-kube-api-access-d29qg\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:13 crc kubenswrapper[4729]: I0127 15:59:13.704805 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58e9e7f20b7333376c2dd9950a8f25a23f0410f2bea6334ea836db178b5ba32" Jan 27 15:59:13 crc kubenswrapper[4729]: I0127 15:59:13.705285 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-pkflk" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.067988 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a67642-fad0-484b-923c-f95379380536" path="/var/lib/kubelet/pods/01a67642-fad0-484b-923c-f95379380536/volumes" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.097896 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7js5l/crc-debug-tz78x"] Jan 27 15:59:14 crc kubenswrapper[4729]: E0127 15:59:14.098440 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a67642-fad0-484b-923c-f95379380536" containerName="container-00" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.098461 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a67642-fad0-484b-923c-f95379380536" containerName="container-00" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.099153 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a67642-fad0-484b-923c-f95379380536" containerName="container-00" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.100311 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.146611 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sscw8\" (UniqueName: \"kubernetes.io/projected/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-kube-api-access-sscw8\") pod \"crc-debug-tz78x\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.146678 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-host\") pod \"crc-debug-tz78x\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.249647 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sscw8\" (UniqueName: \"kubernetes.io/projected/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-kube-api-access-sscw8\") pod \"crc-debug-tz78x\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.249732 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-host\") pod \"crc-debug-tz78x\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.249936 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-host\") pod \"crc-debug-tz78x\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc 
kubenswrapper[4729]: I0127 15:59:14.271994 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sscw8\" (UniqueName: \"kubernetes.io/projected/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-kube-api-access-sscw8\") pod \"crc-debug-tz78x\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.419166 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:14 crc kubenswrapper[4729]: I0127 15:59:14.717541 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-tz78x" event={"ID":"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39","Type":"ContainerStarted","Data":"560e98f114b7f267dded55d140557495c0f48463925ad883ab90ce7fbe701459"} Jan 27 15:59:15 crc kubenswrapper[4729]: I0127 15:59:15.051900 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:59:15 crc kubenswrapper[4729]: E0127 15:59:15.052547 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 15:59:15 crc kubenswrapper[4729]: I0127 15:59:15.731834 4729 generic.go:334] "Generic (PLEG): container finished" podID="2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" containerID="04347b3ad967887b8398f8dd042c3acc42044c6b0e1d029e2583bb00d3f0bfb4" exitCode=0 Jan 27 15:59:15 crc kubenswrapper[4729]: I0127 15:59:15.731927 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-tz78x" 
event={"ID":"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39","Type":"ContainerDied","Data":"04347b3ad967887b8398f8dd042c3acc42044c6b0e1d029e2583bb00d3f0bfb4"} Jan 27 15:59:16 crc kubenswrapper[4729]: I0127 15:59:16.901634 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.017141 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-host\") pod \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.017219 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-host" (OuterVolumeSpecName: "host") pod "2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" (UID: "2cad91c4-1b5a-4a9d-845c-8fe5831a7a39"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.017276 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sscw8\" (UniqueName: \"kubernetes.io/projected/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-kube-api-access-sscw8\") pod \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\" (UID: \"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39\") " Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.018054 4729 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-host\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.023044 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-kube-api-access-sscw8" (OuterVolumeSpecName: "kube-api-access-sscw8") pod "2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" (UID: "2cad91c4-1b5a-4a9d-845c-8fe5831a7a39"). InnerVolumeSpecName "kube-api-access-sscw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.119710 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sscw8\" (UniqueName: \"kubernetes.io/projected/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39-kube-api-access-sscw8\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.771659 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-tz78x" event={"ID":"2cad91c4-1b5a-4a9d-845c-8fe5831a7a39","Type":"ContainerDied","Data":"560e98f114b7f267dded55d140557495c0f48463925ad883ab90ce7fbe701459"} Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.772066 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560e98f114b7f267dded55d140557495c0f48463925ad883ab90ce7fbe701459" Jan 27 15:59:17 crc kubenswrapper[4729]: I0127 15:59:17.771720 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tz78x" Jan 27 15:59:18 crc kubenswrapper[4729]: I0127 15:59:18.092975 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7js5l/crc-debug-tz78x"] Jan 27 15:59:18 crc kubenswrapper[4729]: I0127 15:59:18.103676 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7js5l/crc-debug-tz78x"] Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.259786 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7js5l/crc-debug-tr7bj"] Jan 27 15:59:19 crc kubenswrapper[4729]: E0127 15:59:19.260593 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" containerName="container-00" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.260606 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" containerName="container-00" Jan 27 15:59:19 crc 
kubenswrapper[4729]: I0127 15:59:19.260848 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" containerName="container-00" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.261681 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.374967 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l877h\" (UniqueName: \"kubernetes.io/projected/1a46ada5-4355-4b8f-9df6-3e726fb90c05-kube-api-access-l877h\") pod \"crc-debug-tr7bj\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.375022 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a46ada5-4355-4b8f-9df6-3e726fb90c05-host\") pod \"crc-debug-tr7bj\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.477978 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l877h\" (UniqueName: \"kubernetes.io/projected/1a46ada5-4355-4b8f-9df6-3e726fb90c05-kube-api-access-l877h\") pod \"crc-debug-tr7bj\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.478021 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a46ada5-4355-4b8f-9df6-3e726fb90c05-host\") pod \"crc-debug-tr7bj\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.478227 4729 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a46ada5-4355-4b8f-9df6-3e726fb90c05-host\") pod \"crc-debug-tr7bj\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.501193 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l877h\" (UniqueName: \"kubernetes.io/projected/1a46ada5-4355-4b8f-9df6-3e726fb90c05-kube-api-access-l877h\") pod \"crc-debug-tr7bj\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.582925 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:19 crc kubenswrapper[4729]: W0127 15:59:19.626782 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a46ada5_4355_4b8f_9df6_3e726fb90c05.slice/crio-617c9d77932389966f1395e7ed14bc44b8c9c2a6787c9bdee20a269e1a671b44 WatchSource:0}: Error finding container 617c9d77932389966f1395e7ed14bc44b8c9c2a6787c9bdee20a269e1a671b44: Status 404 returned error can't find the container with id 617c9d77932389966f1395e7ed14bc44b8c9c2a6787c9bdee20a269e1a671b44 Jan 27 15:59:19 crc kubenswrapper[4729]: I0127 15:59:19.799325 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-tr7bj" event={"ID":"1a46ada5-4355-4b8f-9df6-3e726fb90c05","Type":"ContainerStarted","Data":"617c9d77932389966f1395e7ed14bc44b8c9c2a6787c9bdee20a269e1a671b44"} Jan 27 15:59:20 crc kubenswrapper[4729]: I0127 15:59:20.067283 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cad91c4-1b5a-4a9d-845c-8fe5831a7a39" path="/var/lib/kubelet/pods/2cad91c4-1b5a-4a9d-845c-8fe5831a7a39/volumes" Jan 27 15:59:20 crc 
kubenswrapper[4729]: I0127 15:59:20.818152 4729 generic.go:334] "Generic (PLEG): container finished" podID="1a46ada5-4355-4b8f-9df6-3e726fb90c05" containerID="cdbde2157f8c9eb17568d376a653fcb0e3b9eee116cde98ec00468fdad4d5ba9" exitCode=0 Jan 27 15:59:20 crc kubenswrapper[4729]: I0127 15:59:20.818203 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/crc-debug-tr7bj" event={"ID":"1a46ada5-4355-4b8f-9df6-3e726fb90c05","Type":"ContainerDied","Data":"cdbde2157f8c9eb17568d376a653fcb0e3b9eee116cde98ec00468fdad4d5ba9"} Jan 27 15:59:20 crc kubenswrapper[4729]: I0127 15:59:20.885549 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7js5l/crc-debug-tr7bj"] Jan 27 15:59:20 crc kubenswrapper[4729]: I0127 15:59:20.901125 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7js5l/crc-debug-tr7bj"] Jan 27 15:59:21 crc kubenswrapper[4729]: I0127 15:59:21.966500 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.144686 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l877h\" (UniqueName: \"kubernetes.io/projected/1a46ada5-4355-4b8f-9df6-3e726fb90c05-kube-api-access-l877h\") pod \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.144918 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a46ada5-4355-4b8f-9df6-3e726fb90c05-host\") pod \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\" (UID: \"1a46ada5-4355-4b8f-9df6-3e726fb90c05\") " Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.144968 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a46ada5-4355-4b8f-9df6-3e726fb90c05-host" (OuterVolumeSpecName: "host") pod "1a46ada5-4355-4b8f-9df6-3e726fb90c05" (UID: "1a46ada5-4355-4b8f-9df6-3e726fb90c05"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.147646 4729 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1a46ada5-4355-4b8f-9df6-3e726fb90c05-host\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.152954 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a46ada5-4355-4b8f-9df6-3e726fb90c05-kube-api-access-l877h" (OuterVolumeSpecName: "kube-api-access-l877h") pod "1a46ada5-4355-4b8f-9df6-3e726fb90c05" (UID: "1a46ada5-4355-4b8f-9df6-3e726fb90c05"). InnerVolumeSpecName "kube-api-access-l877h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.249718 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l877h\" (UniqueName: \"kubernetes.io/projected/1a46ada5-4355-4b8f-9df6-3e726fb90c05-kube-api-access-l877h\") on node \"crc\" DevicePath \"\"" Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.845112 4729 scope.go:117] "RemoveContainer" containerID="cdbde2157f8c9eb17568d376a653fcb0e3b9eee116cde98ec00468fdad4d5ba9" Jan 27 15:59:22 crc kubenswrapper[4729]: I0127 15:59:22.845312 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/crc-debug-tr7bj" Jan 27 15:59:24 crc kubenswrapper[4729]: I0127 15:59:24.089651 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a46ada5-4355-4b8f-9df6-3e726fb90c05" path="/var/lib/kubelet/pods/1a46ada5-4355-4b8f-9df6-3e726fb90c05/volumes" Jan 27 15:59:28 crc kubenswrapper[4729]: I0127 15:59:28.051272 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 15:59:28 crc kubenswrapper[4729]: I0127 15:59:28.906646 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"4f5f41da375cb91c13dff4f50aab3dfe01a99162c052c53e3ac743bef2732a75"} Jan 27 15:59:49 crc kubenswrapper[4729]: I0127 15:59:49.738869 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-api/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.043819 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-evaluator/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.100648 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-listener/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.167380 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-notifier/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.341017 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b46b856c4-72fkv_260a1ff1-928b-446f-9480-fb8d8fe342f1/barbican-api/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.422109 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b46b856c4-72fkv_260a1ff1-928b-446f-9480-fb8d8fe342f1/barbican-api-log/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.608316 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74fdd6f9c6-65ljj_0c7b3357-e7e9-415b-8253-7ee68b4149a0/barbican-keystone-listener/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.760050 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74fdd6f9c6-65ljj_0c7b3357-e7e9-415b-8253-7ee68b4149a0/barbican-keystone-listener-log/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.898693 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8475c76cbc-gtz96_3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400/barbican-worker-log/0.log" Jan 27 15:59:50 crc kubenswrapper[4729]: I0127 15:59:50.901847 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8475c76cbc-gtz96_3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400/barbican-worker/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.104903 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww_9795f0ec-6b8d-4470-bd63-584192019fcf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.328180 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/ceilometer-central-agent/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.381436 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/ceilometer-notification-agent/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.426689 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/proxy-httpd/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.523422 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/sg-core/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.760511 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_28e34a55-2a5a-4da3-8f4e-ece70df636e2/cinder-api/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.766257 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_28e34a55-2a5a-4da3-8f4e-ece70df636e2/cinder-api-log/0.log" Jan 27 15:59:51 crc kubenswrapper[4729]: I0127 15:59:51.992327 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c2742dbf-31d5-4550-88df-d1b01e4f7dc4/cinder-scheduler/0.log" Jan 27 15:59:52 crc kubenswrapper[4729]: I0127 15:59:52.043860 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c2742dbf-31d5-4550-88df-d1b01e4f7dc4/probe/0.log" Jan 27 15:59:52 crc kubenswrapper[4729]: I0127 15:59:52.246241 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-28rl7_32728732-3b43-4a8e-9f61-f028fd4b3d74/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:52 
crc kubenswrapper[4729]: I0127 15:59:52.379888 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t_6cc5ced3-d419-4224-a474-bd34874d18dc/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:52 crc kubenswrapper[4729]: I0127 15:59:52.488977 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-p9jfn_31add6d0-b976-4106-93f2-d9f13b3de020/init/0.log" Jan 27 15:59:52 crc kubenswrapper[4729]: I0127 15:59:52.758462 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-p9jfn_31add6d0-b976-4106-93f2-d9f13b3de020/init/0.log" Jan 27 15:59:52 crc kubenswrapper[4729]: I0127 15:59:52.758520 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-p9jfn_31add6d0-b976-4106-93f2-d9f13b3de020/dnsmasq-dns/0.log" Jan 27 15:59:52 crc kubenswrapper[4729]: I0127 15:59:52.790482 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g_657d96d8-d313-4860-acae-64d35608cd5d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:53 crc kubenswrapper[4729]: I0127 15:59:53.315135 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2132f727-3016-42f6-ba30-864e70540513/glance-httpd/0.log" Jan 27 15:59:53 crc kubenswrapper[4729]: I0127 15:59:53.315719 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2132f727-3016-42f6-ba30-864e70540513/glance-log/0.log" Jan 27 15:59:53 crc kubenswrapper[4729]: I0127 15:59:53.625052 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e371e969-2ec7-42fe-95bd-5765dc511224/glance-httpd/0.log" Jan 27 15:59:53 crc kubenswrapper[4729]: I0127 15:59:53.671425 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_e371e969-2ec7-42fe-95bd-5765dc511224/glance-log/0.log" Jan 27 15:59:54 crc kubenswrapper[4729]: I0127 15:59:54.543579 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5ff89df78c-6425l_be6cee48-8743-49f2-a13b-6ce80981cfdb/heat-api/0.log" Jan 27 15:59:54 crc kubenswrapper[4729]: I0127 15:59:54.669443 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-795d549794-t2xb4_5271075b-f655-47d8-b621-44711d9e495c/heat-engine/0.log" Jan 27 15:59:54 crc kubenswrapper[4729]: I0127 15:59:54.746598 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr_2121d941-3524-4b71-ac16-41f4679e3525/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:54 crc kubenswrapper[4729]: I0127 15:59:54.762771 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7fff5b4d49-29sw4_91704ade-1ead-4e59-b743-f93c932a4450/heat-cfnapi/0.log" Jan 27 15:59:54 crc kubenswrapper[4729]: I0127 15:59:54.989669 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7dfxr_ebfb952d-e5d5-4ce8-9eb7-49f058023970/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:55 crc kubenswrapper[4729]: I0127 15:59:55.319325 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492101-z8cpg_1c34a1bb-cfec-4b86-af1a-b633dd398427/keystone-cron/0.log" Jan 27 15:59:55 crc kubenswrapper[4729]: I0127 15:59:55.484221 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_51fc79dc-e632-414e-a354-54c3bfd2eb8d/kube-state-metrics/0.log" Jan 27 15:59:55 crc kubenswrapper[4729]: I0127 15:59:55.540618 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-79f67449f-t7hgq_66c77355-de9a-4aab-8d65-504c74911382/keystone-api/0.log" Jan 27 15:59:55 crc kubenswrapper[4729]: I0127 15:59:55.653948 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw_33c4c74a-3a24-43e4-94ff-84a794d0db7d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:55 crc kubenswrapper[4729]: I0127 15:59:55.761558 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-b4nnk_e771e774-7470-4b36-a60e-bab34a04185a/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:56 crc kubenswrapper[4729]: I0127 15:59:56.091323 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_3c869358-ae88-4f4a-9317-4e1176fdb199/mysqld-exporter/0.log" Jan 27 15:59:56 crc kubenswrapper[4729]: I0127 15:59:56.398263 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85dcfc7bf5-fs787_8b358632-8eef-4842-91bc-9c69460a5dea/neutron-api/0.log" Jan 27 15:59:56 crc kubenswrapper[4729]: I0127 15:59:56.428541 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85dcfc7bf5-fs787_8b358632-8eef-4842-91bc-9c69460a5dea/neutron-httpd/0.log" Jan 27 15:59:56 crc kubenswrapper[4729]: I0127 15:59:56.752822 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf_067cab76-3d24-4a20-a016-0141d54181a2/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:57 crc kubenswrapper[4729]: I0127 15:59:57.400568 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_76dbfd5b-82fa-4998-bd3b-6ead39c5f73b/nova-cell0-conductor-conductor/0.log" Jan 27 15:59:57 crc kubenswrapper[4729]: I0127 15:59:57.707357 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_48379ff4-1d0a-400d-a40b-a3ed65415c39/nova-api-log/0.log" Jan 27 15:59:57 crc kubenswrapper[4729]: I0127 15:59:57.783835 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e8c150bd-4541-46f0-8c70-1e5482e6b3f3/nova-cell1-conductor-conductor/0.log" Jan 27 15:59:58 crc kubenswrapper[4729]: I0127 15:59:58.150272 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9201a195-6f0c-4521-a4d9-a31706dbedce/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 15:59:58 crc kubenswrapper[4729]: I0127 15:59:58.180200 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-skhnk_d4f6bdbc-1305-4c66-8d8c-a3425163fd27/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 15:59:58 crc kubenswrapper[4729]: I0127 15:59:58.283130 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_48379ff4-1d0a-400d-a40b-a3ed65415c39/nova-api-api/0.log" Jan 27 15:59:58 crc kubenswrapper[4729]: I0127 15:59:58.580482 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5945a095-d047-46d4-aa7d-3989268e88f9/nova-metadata-log/0.log" Jan 27 15:59:58 crc kubenswrapper[4729]: I0127 15:59:58.983207 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_37137af3-5865-4774-a6bc-4a96bb11a68d/mysql-bootstrap/0.log" Jan 27 15:59:59 crc kubenswrapper[4729]: I0127 15:59:59.064011 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_96379486-5600-4752-9729-0fc090685ea4/nova-scheduler-scheduler/0.log" Jan 27 15:59:59 crc kubenswrapper[4729]: I0127 15:59:59.272075 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_37137af3-5865-4774-a6bc-4a96bb11a68d/galera/0.log" Jan 27 15:59:59 crc kubenswrapper[4729]: I0127 15:59:59.304452 4729 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_37137af3-5865-4774-a6bc-4a96bb11a68d/mysql-bootstrap/0.log" Jan 27 15:59:59 crc kubenswrapper[4729]: I0127 15:59:59.529903 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e387f91f-9a73-4c8b-8e0b-31ed4c3874ba/mysql-bootstrap/0.log" Jan 27 15:59:59 crc kubenswrapper[4729]: I0127 15:59:59.785885 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e387f91f-9a73-4c8b-8e0b-31ed4c3874ba/mysql-bootstrap/0.log" Jan 27 15:59:59 crc kubenswrapper[4729]: I0127 15:59:59.869615 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e387f91f-9a73-4c8b-8e0b-31ed4c3874ba/galera/0.log" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.024305 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0b4b3ce4-58fb-430f-8465-ca0a501a6aba/openstackclient/0.log" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.267299 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gk2cz_5e4b5a47-ff01-4fd6-b69f-4d70efc77a12/ovn-controller/0.log" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.273335 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp"] Jan 27 16:00:00 crc kubenswrapper[4729]: E0127 16:00:00.273822 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a46ada5-4355-4b8f-9df6-3e726fb90c05" containerName="container-00" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.273840 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a46ada5-4355-4b8f-9df6-3e726fb90c05" containerName="container-00" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.274089 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a46ada5-4355-4b8f-9df6-3e726fb90c05" containerName="container-00" Jan 
27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.275018 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.293710 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.319734 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.361705 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxst9\" (UniqueName: \"kubernetes.io/projected/e6b8732e-5a9e-404b-882f-55eae3a53fa5-kube-api-access-bxst9\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.361766 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b8732e-5a9e-404b-882f-55eae3a53fa5-config-volume\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.361802 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6b8732e-5a9e-404b-882f-55eae3a53fa5-secret-volume\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 
16:00:00.406011 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp"] Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.464099 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxst9\" (UniqueName: \"kubernetes.io/projected/e6b8732e-5a9e-404b-882f-55eae3a53fa5-kube-api-access-bxst9\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.464188 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b8732e-5a9e-404b-882f-55eae3a53fa5-config-volume\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.464264 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6b8732e-5a9e-404b-882f-55eae3a53fa5-secret-volume\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.465092 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b8732e-5a9e-404b-882f-55eae3a53fa5-config-volume\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.499052 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e6b8732e-5a9e-404b-882f-55eae3a53fa5-secret-volume\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.499102 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxst9\" (UniqueName: \"kubernetes.io/projected/e6b8732e-5a9e-404b-882f-55eae3a53fa5-kube-api-access-bxst9\") pod \"collect-profiles-29492160-clhfp\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.502047 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ngktk_0eab35a0-e5dd-4c49-9d7b-9f8f0722e754/openstack-network-exporter/0.log" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.611042 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:00 crc kubenswrapper[4729]: I0127 16:00:00.731360 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovsdb-server-init/0.log" Jan 27 16:00:01 crc kubenswrapper[4729]: I0127 16:00:01.489763 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5945a095-d047-46d4-aa7d-3989268e88f9/nova-metadata-metadata/0.log" Jan 27 16:00:01 crc kubenswrapper[4729]: I0127 16:00:01.545054 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovsdb-server-init/0.log" Jan 27 16:00:01 crc kubenswrapper[4729]: I0127 16:00:01.569186 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovs-vswitchd/0.log" Jan 27 16:00:01 crc kubenswrapper[4729]: I0127 16:00:01.641419 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovsdb-server/0.log" Jan 27 16:00:01 crc kubenswrapper[4729]: I0127 16:00:01.872139 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mqwsj_78f36cea-77c5-44dd-9952-6392811d2d40/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:01 crc kubenswrapper[4729]: I0127 16:00:01.998133 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c1da68b-399e-4543-918f-6deed78e3626/openstack-network-exporter/0.log" Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.144972 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp"] Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.253847 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_6c1da68b-399e-4543-918f-6deed78e3626/ovn-northd/0.log" Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.359577 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" event={"ID":"e6b8732e-5a9e-404b-882f-55eae3a53fa5","Type":"ContainerStarted","Data":"d60a7779a906d633ff8c673d8230b78c93c866615e076cb42d172dd4d4ba6d10"} Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.411963 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_37a67feb-a317-4a04-af97-028064ca39da/openstack-network-exporter/0.log" Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.431945 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_37a67feb-a317-4a04-af97-028064ca39da/ovsdbserver-nb/0.log" Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.698759 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3290bc53-f838-4b2f-9f5a-053331751546/openstack-network-exporter/0.log" Jan 27 16:00:02 crc kubenswrapper[4729]: I0127 16:00:02.783771 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3290bc53-f838-4b2f-9f5a-053331751546/ovsdbserver-sb/0.log" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.130103 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/init-config-reloader/0.log" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.198317 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb844499b-jdr2d_6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2/placement-api/0.log" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.269707 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb844499b-jdr2d_6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2/placement-log/0.log" Jan 27 
16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.375977 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" event={"ID":"e6b8732e-5a9e-404b-882f-55eae3a53fa5","Type":"ContainerStarted","Data":"a91cbb59613862de523f76db225a02f80610ba4eb2ceca4c1cf4ecf570fba81d"} Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.405997 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" podStartSLOduration=3.405978332 podStartE2EDuration="3.405978332s" podCreationTimestamp="2026-01-27 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:00:03.393095202 +0000 UTC m=+6889.977286216" watchObservedRunningTime="2026-01-27 16:00:03.405978332 +0000 UTC m=+6889.990169336" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.456313 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/init-config-reloader/0.log" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.559481 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/config-reloader/0.log" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.575272 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/prometheus/0.log" Jan 27 16:00:03 crc kubenswrapper[4729]: I0127 16:00:03.811004 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/thanos-sidecar/0.log" Jan 27 16:00:04 crc kubenswrapper[4729]: I0127 16:00:04.045670 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464/setup-container/0.log" Jan 27 16:00:04 crc kubenswrapper[4729]: I0127 16:00:04.392324 4729 generic.go:334] "Generic (PLEG): container finished" podID="e6b8732e-5a9e-404b-882f-55eae3a53fa5" containerID="a91cbb59613862de523f76db225a02f80610ba4eb2ceca4c1cf4ecf570fba81d" exitCode=0 Jan 27 16:00:04 crc kubenswrapper[4729]: I0127 16:00:04.392654 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" event={"ID":"e6b8732e-5a9e-404b-882f-55eae3a53fa5","Type":"ContainerDied","Data":"a91cbb59613862de523f76db225a02f80610ba4eb2ceca4c1cf4ecf570fba81d"} Jan 27 16:00:04 crc kubenswrapper[4729]: I0127 16:00:04.433134 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464/setup-container/0.log" Jan 27 16:00:04 crc kubenswrapper[4729]: I0127 16:00:04.434275 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464/rabbitmq/0.log" Jan 27 16:00:04 crc kubenswrapper[4729]: I0127 16:00:04.446366 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a360fb27-d7b1-4d42-9889-f47c87012e2e/setup-container/0.log" Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.212737 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a360fb27-d7b1-4d42-9889-f47c87012e2e/setup-container/0.log" Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.276273 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a360fb27-d7b1-4d42-9889-f47c87012e2e/rabbitmq/0.log" Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.499087 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1/setup-container/0.log" 
Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.856067 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_75b1f41d-64ad-4dec-a082-9e81438dfe0f/setup-container/0.log" Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.858807 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1/rabbitmq/0.log" Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.874226 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1/setup-container/0.log" Jan 27 16:00:05 crc kubenswrapper[4729]: I0127 16:00:05.969271 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.074418 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b8732e-5a9e-404b-882f-55eae3a53fa5-config-volume\") pod \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.074511 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6b8732e-5a9e-404b-882f-55eae3a53fa5-secret-volume\") pod \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.074859 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxst9\" (UniqueName: \"kubernetes.io/projected/e6b8732e-5a9e-404b-882f-55eae3a53fa5-kube-api-access-bxst9\") pod \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\" (UID: \"e6b8732e-5a9e-404b-882f-55eae3a53fa5\") " Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.075516 4729 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6b8732e-5a9e-404b-882f-55eae3a53fa5-config-volume" (OuterVolumeSpecName: "config-volume") pod "e6b8732e-5a9e-404b-882f-55eae3a53fa5" (UID: "e6b8732e-5a9e-404b-882f-55eae3a53fa5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.075768 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b8732e-5a9e-404b-882f-55eae3a53fa5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.093231 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b8732e-5a9e-404b-882f-55eae3a53fa5-kube-api-access-bxst9" (OuterVolumeSpecName: "kube-api-access-bxst9") pod "e6b8732e-5a9e-404b-882f-55eae3a53fa5" (UID: "e6b8732e-5a9e-404b-882f-55eae3a53fa5"). InnerVolumeSpecName "kube-api-access-bxst9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.095609 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b8732e-5a9e-404b-882f-55eae3a53fa5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e6b8732e-5a9e-404b-882f-55eae3a53fa5" (UID: "e6b8732e-5a9e-404b-882f-55eae3a53fa5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.177443 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxst9\" (UniqueName: \"kubernetes.io/projected/e6b8732e-5a9e-404b-882f-55eae3a53fa5-kube-api-access-bxst9\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.177473 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e6b8732e-5a9e-404b-882f-55eae3a53fa5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.306916 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_75b1f41d-64ad-4dec-a082-9e81438dfe0f/setup-container/0.log" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.328021 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t_0860b8dc-10f3-41e7-8f6e-231f28f3cea6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.424704 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" event={"ID":"e6b8732e-5a9e-404b-882f-55eae3a53fa5","Type":"ContainerDied","Data":"d60a7779a906d633ff8c673d8230b78c93c866615e076cb42d172dd4d4ba6d10"} Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.424747 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60a7779a906d633ff8c673d8230b78c93c866615e076cb42d172dd4d4ba6d10" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.424801 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492160-clhfp" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.438718 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_75b1f41d-64ad-4dec-a082-9e81438dfe0f/rabbitmq/0.log" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.499526 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9"] Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.514399 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492115-q2jd9"] Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.644313 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-dtn99_3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:06 crc kubenswrapper[4729]: I0127 16:00:06.720834 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw_90776e5b-71cf-43d1-969b-16278afed3cf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:07 crc kubenswrapper[4729]: I0127 16:00:07.047121 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qz44w_af66e59e-8967-4730-a4ca-9ff115554d5b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:07 crc kubenswrapper[4729]: I0127 16:00:07.141777 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2nstj_27dd40db-176b-45b4-a886-967fcb9ce2df/ssh-known-hosts-edpm-deployment/0.log" Jan 27 16:00:07 crc kubenswrapper[4729]: I0127 16:00:07.573921 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-849d9cdd4f-w5qzz_39caa2da-8dac-4581-8e89-2b7f3b013b8c/proxy-server/0.log" 
Jan 27 16:00:07 crc kubenswrapper[4729]: I0127 16:00:07.700091 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-z4znt_fb94bfab-bf68-4e03-9a32-b4de4d765b1f/swift-ring-rebalance/0.log" Jan 27 16:00:07 crc kubenswrapper[4729]: I0127 16:00:07.832840 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-auditor/0.log" Jan 27 16:00:07 crc kubenswrapper[4729]: I0127 16:00:07.849810 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-849d9cdd4f-w5qzz_39caa2da-8dac-4581-8e89-2b7f3b013b8c/proxy-httpd/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.040762 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-reaper/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.073112 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1b8f26b-903d-4da4-b389-f805f726c63d" path="/var/lib/kubelet/pods/a1b8f26b-903d-4da4-b389-f805f726c63d/volumes" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.115654 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-replicator/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.142232 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-server/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.247204 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-auditor/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.383349 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-server/0.log" Jan 27 16:00:08 
crc kubenswrapper[4729]: I0127 16:00:08.450435 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-replicator/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.459429 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-updater/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.570589 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-auditor/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.928133 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-expirer/0.log" Jan 27 16:00:08 crc kubenswrapper[4729]: I0127 16:00:08.995929 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-server/0.log" Jan 27 16:00:09 crc kubenswrapper[4729]: I0127 16:00:09.016834 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-replicator/0.log" Jan 27 16:00:09 crc kubenswrapper[4729]: I0127 16:00:09.041744 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-updater/0.log" Jan 27 16:00:09 crc kubenswrapper[4729]: I0127 16:00:09.238086 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/rsync/0.log" Jan 27 16:00:09 crc kubenswrapper[4729]: I0127 16:00:09.256060 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/swift-recon-cron/0.log" Jan 27 16:00:09 crc kubenswrapper[4729]: I0127 16:00:09.525150 4729 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp_5639c133-4cde-40dc-a7f3-e716aaab5ca8/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:09 crc kubenswrapper[4729]: I0127 16:00:09.793520 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64_0b30179d-d4ac-44b0-9675-7f0ef071caf5/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:10 crc kubenswrapper[4729]: I0127 16:00:10.017232 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d639ea1e-42fa-4467-9d0c-1f66c65c108f/test-operator-logs-container/0.log" Jan 27 16:00:10 crc kubenswrapper[4729]: I0127 16:00:10.346252 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m_78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:00:10 crc kubenswrapper[4729]: I0127 16:00:10.996414 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_440fdd61-ad16-4ee7-bf64-2754db1c5db8/tempest-tests-tempest-tests-runner/0.log" Jan 27 16:00:16 crc kubenswrapper[4729]: I0127 16:00:16.779403 4729 scope.go:117] "RemoveContainer" containerID="0bbf2e0c6297cd8b537147bb11b80d1bd65d40f06d931914f16175e0cc7a3db7" Jan 27 16:00:17 crc kubenswrapper[4729]: I0127 16:00:17.838432 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_041a96ab-9f21-4d02-80df-cf7d6a81323b/memcached/0.log" Jan 27 16:00:44 crc kubenswrapper[4729]: I0127 16:00:44.870377 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/util/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.144215 4729 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/util/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.188752 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/pull/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.195330 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/pull/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.466252 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/extract/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.493261 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/util/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.510583 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/pull/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.874008 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-m7jfx_27edcc9a-7976-42bd-9e8b-a7c95936f305/manager/0.log" Jan 27 16:00:45 crc kubenswrapper[4729]: I0127 16:00:45.915911 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-fcvrz_a27299b3-aeb1-4014-a145-6b5b908542fc/manager/0.log" 
Jan 27 16:00:46 crc kubenswrapper[4729]: I0127 16:00:46.121052 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-4bpwj_6818e775-019d-4bda-94ba-b7e550c9a127/manager/0.log" Jan 27 16:00:46 crc kubenswrapper[4729]: I0127 16:00:46.248056 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-ck286_c9cd8871-5d83-436f-b787-a8769327429d/manager/0.log" Jan 27 16:00:46 crc kubenswrapper[4729]: I0127 16:00:46.929982 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-qlm8l_64a73e98-23a2-4634-ba0f-fcf5389e38e1/manager/0.log" Jan 27 16:00:47 crc kubenswrapper[4729]: I0127 16:00:47.054190 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-5csls_48e0b394-ae44-484e-821f-b821cd11c656/manager/0.log" Jan 27 16:00:47 crc kubenswrapper[4729]: I0127 16:00:47.389754 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-rnnng_53268481-b675-416f-a9d3-343d349e3bb4/manager/0.log" Jan 27 16:00:48 crc kubenswrapper[4729]: I0127 16:00:48.239415 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-h98cg_49b9ec9d-9998-465c-b62f-5c97d5913dd7/manager/0.log" Jan 27 16:00:48 crc kubenswrapper[4729]: I0127 16:00:48.506519 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bfbgb_80666255-494b-4c9a-8434-49c509505a32/manager/0.log" Jan 27 16:00:48 crc kubenswrapper[4729]: I0127 16:00:48.599784 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-vm6sf_ef99dd2b-4274-4277-8517-c748ef232c38/manager/0.log" Jan 27 16:00:49 crc kubenswrapper[4729]: I0127 16:00:49.120071 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-hn29j_e0d9910b-f1f9-4f1e-b920-dd1c3c787f78/manager/0.log" Jan 27 16:00:49 crc kubenswrapper[4729]: I0127 16:00:49.191362 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-54vz9_eb48ac92-5355-41a2-bdce-f70e47cb91d9/manager/0.log" Jan 27 16:00:49 crc kubenswrapper[4729]: I0127 16:00:49.210183 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-w79q5_73a5b611-6e78-44bf-94ad-2a1fdf4a4819/manager/0.log" Jan 27 16:00:49 crc kubenswrapper[4729]: I0127 16:00:49.750282 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-dpnb5_d9c052a4-bb18-4634-8acd-13d899dcc8af/manager/0.log" Jan 27 16:00:49 crc kubenswrapper[4729]: I0127 16:00:49.885366 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7_5a91ffd3-fab5-40f6-b808-7d0fd80888aa/manager/0.log" Jan 27 16:00:50 crc kubenswrapper[4729]: I0127 16:00:50.017054 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-77cf586fbc-wj4vn_3cda87f1-f88f-4ade-a0fd-d0359a00e665/operator/0.log" Jan 27 16:00:50 crc kubenswrapper[4729]: I0127 16:00:50.291207 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7hw47_19dcb1bc-8570-40b5-9493-349fc2ea4cc0/registry-server/0.log" Jan 27 16:00:50 crc kubenswrapper[4729]: I0127 16:00:50.632757 4729 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-22t59_a393649b-f1a3-44fb-9cb8-a289fcc3f01f/manager/0.log" Jan 27 16:00:50 crc kubenswrapper[4729]: I0127 16:00:50.747123 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-7q6dx_e1ef0def-8b43-404c-a20f-ccffb028796d/manager/0.log" Jan 27 16:00:50 crc kubenswrapper[4729]: I0127 16:00:50.941030 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-khqrr_6e8131d6-585c-43f8-9231-204ef68de1ba/operator/0.log" Jan 27 16:00:51 crc kubenswrapper[4729]: I0127 16:00:51.102333 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-m4z7c_60c59a6c-9eb5-4869-8d50-2cb234912d6b/manager/0.log" Jan 27 16:00:51 crc kubenswrapper[4729]: I0127 16:00:51.495738 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mxmhp_d5d5726b-1680-44de-9752-2e56e45a3d12/manager/0.log" Jan 27 16:00:51 crc kubenswrapper[4729]: I0127 16:00:51.761759 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-9c4lk_7f77a2ce-03ee-4d74-a7df-052255e0f337/manager/0.log" Jan 27 16:00:51 crc kubenswrapper[4729]: I0127 16:00:51.822729 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-858d4757d5-qn8zm_4a222b58-d97f-4d40-9bb4-517b4798eb07/manager/0.log" Jan 27 16:00:51 crc kubenswrapper[4729]: I0127 16:00:51.826050 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-66f997549c-st8m2_8eb4b08e-edb2-4db8-af1e-549e8e1396d1/manager/0.log" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.162703 4729 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29492161-ssgd2"] Jan 27 16:01:00 crc kubenswrapper[4729]: E0127 16:01:00.163730 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b8732e-5a9e-404b-882f-55eae3a53fa5" containerName="collect-profiles" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.163743 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b8732e-5a9e-404b-882f-55eae3a53fa5" containerName="collect-profiles" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.164014 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b8732e-5a9e-404b-882f-55eae3a53fa5" containerName="collect-profiles" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.166716 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.175860 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492161-ssgd2"] Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.196983 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-config-data\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.197110 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-fernet-keys\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.197248 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x486n\" (UniqueName: \"kubernetes.io/projected/a6627c66-78f6-432d-83fb-20578f0e7acb-kube-api-access-x486n\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.197289 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-combined-ca-bundle\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.299211 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-fernet-keys\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.299453 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x486n\" (UniqueName: \"kubernetes.io/projected/a6627c66-78f6-432d-83fb-20578f0e7acb-kube-api-access-x486n\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.299492 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-combined-ca-bundle\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.299562 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-config-data\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.318320 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-fernet-keys\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.318320 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-combined-ca-bundle\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.320043 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-config-data\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.333662 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x486n\" (UniqueName: \"kubernetes.io/projected/a6627c66-78f6-432d-83fb-20578f0e7acb-kube-api-access-x486n\") pod \"keystone-cron-29492161-ssgd2\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:00 crc kubenswrapper[4729]: I0127 16:01:00.544520 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:01 crc kubenswrapper[4729]: I0127 16:01:01.457153 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29492161-ssgd2"] Jan 27 16:01:02 crc kubenswrapper[4729]: I0127 16:01:02.118383 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-ssgd2" event={"ID":"a6627c66-78f6-432d-83fb-20578f0e7acb","Type":"ContainerStarted","Data":"6f6e06eed863523bea310062999492325bd976be5f0d5449468eb38ea0b384d7"} Jan 27 16:01:02 crc kubenswrapper[4729]: I0127 16:01:02.119239 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-ssgd2" event={"ID":"a6627c66-78f6-432d-83fb-20578f0e7acb","Type":"ContainerStarted","Data":"3ee532bb4f07cf4dc8221e818f347afa9acc3e0979037a9197ebdb76d3c234a0"} Jan 27 16:01:02 crc kubenswrapper[4729]: I0127 16:01:02.142244 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29492161-ssgd2" podStartSLOduration=2.142221532 podStartE2EDuration="2.142221532s" podCreationTimestamp="2026-01-27 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:01:02.135616696 +0000 UTC m=+6948.719807710" watchObservedRunningTime="2026-01-27 16:01:02.142221532 +0000 UTC m=+6948.726412546" Jan 27 16:01:08 crc kubenswrapper[4729]: I0127 16:01:08.200234 4729 generic.go:334] "Generic (PLEG): container finished" podID="a6627c66-78f6-432d-83fb-20578f0e7acb" containerID="6f6e06eed863523bea310062999492325bd976be5f0d5449468eb38ea0b384d7" exitCode=0 Jan 27 16:01:08 crc kubenswrapper[4729]: I0127 16:01:08.200320 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-ssgd2" 
event={"ID":"a6627c66-78f6-432d-83fb-20578f0e7acb","Type":"ContainerDied","Data":"6f6e06eed863523bea310062999492325bd976be5f0d5449468eb38ea0b384d7"} Jan 27 16:01:09 crc kubenswrapper[4729]: I0127 16:01:09.870716 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:09 crc kubenswrapper[4729]: I0127 16:01:09.991171 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x486n\" (UniqueName: \"kubernetes.io/projected/a6627c66-78f6-432d-83fb-20578f0e7acb-kube-api-access-x486n\") pod \"a6627c66-78f6-432d-83fb-20578f0e7acb\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " Jan 27 16:01:09 crc kubenswrapper[4729]: I0127 16:01:09.991368 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-config-data\") pod \"a6627c66-78f6-432d-83fb-20578f0e7acb\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " Jan 27 16:01:09 crc kubenswrapper[4729]: I0127 16:01:09.991461 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-fernet-keys\") pod \"a6627c66-78f6-432d-83fb-20578f0e7acb\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " Jan 27 16:01:09 crc kubenswrapper[4729]: I0127 16:01:09.991545 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-combined-ca-bundle\") pod \"a6627c66-78f6-432d-83fb-20578f0e7acb\" (UID: \"a6627c66-78f6-432d-83fb-20578f0e7acb\") " Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.010420 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6627c66-78f6-432d-83fb-20578f0e7acb-kube-api-access-x486n" 
(OuterVolumeSpecName: "kube-api-access-x486n") pod "a6627c66-78f6-432d-83fb-20578f0e7acb" (UID: "a6627c66-78f6-432d-83fb-20578f0e7acb"). InnerVolumeSpecName "kube-api-access-x486n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.013477 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a6627c66-78f6-432d-83fb-20578f0e7acb" (UID: "a6627c66-78f6-432d-83fb-20578f0e7acb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.068649 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6627c66-78f6-432d-83fb-20578f0e7acb" (UID: "a6627c66-78f6-432d-83fb-20578f0e7acb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.077767 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-config-data" (OuterVolumeSpecName: "config-data") pod "a6627c66-78f6-432d-83fb-20578f0e7acb" (UID: "a6627c66-78f6-432d-83fb-20578f0e7acb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.095290 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x486n\" (UniqueName: \"kubernetes.io/projected/a6627c66-78f6-432d-83fb-20578f0e7acb-kube-api-access-x486n\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.095347 4729 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-config-data\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.095360 4729 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.095374 4729 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6627c66-78f6-432d-83fb-20578f0e7acb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.224601 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29492161-ssgd2" event={"ID":"a6627c66-78f6-432d-83fb-20578f0e7acb","Type":"ContainerDied","Data":"3ee532bb4f07cf4dc8221e818f347afa9acc3e0979037a9197ebdb76d3c234a0"} Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.224655 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ee532bb4f07cf4dc8221e818f347afa9acc3e0979037a9197ebdb76d3c234a0" Jan 27 16:01:10 crc kubenswrapper[4729]: I0127 16:01:10.224728 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29492161-ssgd2" Jan 27 16:01:16 crc kubenswrapper[4729]: I0127 16:01:16.597217 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rjlbl_6ce0b622-7220-4c64-ba53-83fe3255d20c/control-plane-machine-set-operator/0.log" Jan 27 16:01:16 crc kubenswrapper[4729]: I0127 16:01:16.791384 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t2v8z_46d3221c-be55-4ab8-95f1-f55bc1eb6596/kube-rbac-proxy/0.log" Jan 27 16:01:16 crc kubenswrapper[4729]: I0127 16:01:16.814569 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t2v8z_46d3221c-be55-4ab8-95f1-f55bc1eb6596/machine-api-operator/0.log" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.005720 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-68w6c_fa223173-c466-46fc-a84d-25e55838018e/cert-manager-controller/0.log" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.032348 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wc72q"] Jan 27 16:01:32 crc kubenswrapper[4729]: E0127 16:01:32.032920 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6627c66-78f6-432d-83fb-20578f0e7acb" containerName="keystone-cron" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.032933 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6627c66-78f6-432d-83fb-20578f0e7acb" containerName="keystone-cron" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.033144 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6627c66-78f6-432d-83fb-20578f0e7acb" containerName="keystone-cron" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.035157 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.083792 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-utilities\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.083917 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-catalog-content\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.084009 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fv52\" (UniqueName: \"kubernetes.io/projected/7b01f55f-44dc-48c2-946a-54248a68a2da-kube-api-access-4fv52\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.089525 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wc72q"] Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.187651 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-utilities\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.187748 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-catalog-content\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.187821 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fv52\" (UniqueName: \"kubernetes.io/projected/7b01f55f-44dc-48c2-946a-54248a68a2da-kube-api-access-4fv52\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.188206 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-utilities\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.188332 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-catalog-content\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.216721 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fv52\" (UniqueName: \"kubernetes.io/projected/7b01f55f-44dc-48c2-946a-54248a68a2da-kube-api-access-4fv52\") pod \"redhat-operators-wc72q\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.369453 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.436896 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-kgx4d_ccab51b9-7558-4837-b6f3-f7727538fbd5/cert-manager-cainjector/0.log" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.539923 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8pvns_e0c7f80f-6d2c-4806-a4ea-192d40937ea3/cert-manager-webhook/0.log" Jan 27 16:01:32 crc kubenswrapper[4729]: I0127 16:01:32.920750 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wc72q"] Jan 27 16:01:32 crc kubenswrapper[4729]: W0127 16:01:32.924194 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b01f55f_44dc_48c2_946a_54248a68a2da.slice/crio-13629c48eb2ec8426ba719b25eeace1b0c38769fd7c54fc02926f2e10af96480 WatchSource:0}: Error finding container 13629c48eb2ec8426ba719b25eeace1b0c38769fd7c54fc02926f2e10af96480: Status 404 returned error can't find the container with id 13629c48eb2ec8426ba719b25eeace1b0c38769fd7c54fc02926f2e10af96480 Jan 27 16:01:33 crc kubenswrapper[4729]: I0127 16:01:33.480920 4729 generic.go:334] "Generic (PLEG): container finished" podID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerID="5c81dc63e99a86f59f4eba4e778db4c0983ab828c64c3dd50ec42cbc3e390abd" exitCode=0 Jan 27 16:01:33 crc kubenswrapper[4729]: I0127 16:01:33.481196 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerDied","Data":"5c81dc63e99a86f59f4eba4e778db4c0983ab828c64c3dd50ec42cbc3e390abd"} Jan 27 16:01:33 crc kubenswrapper[4729]: I0127 16:01:33.481222 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerStarted","Data":"13629c48eb2ec8426ba719b25eeace1b0c38769fd7c54fc02926f2e10af96480"} Jan 27 16:01:35 crc kubenswrapper[4729]: I0127 16:01:35.509363 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerStarted","Data":"e5ef45707963c0f527cc0d4ee054d8833a51d1de21db7a9fb901df4f22501328"} Jan 27 16:01:42 crc kubenswrapper[4729]: I0127 16:01:42.590753 4729 generic.go:334] "Generic (PLEG): container finished" podID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerID="e5ef45707963c0f527cc0d4ee054d8833a51d1de21db7a9fb901df4f22501328" exitCode=0 Jan 27 16:01:42 crc kubenswrapper[4729]: I0127 16:01:42.590831 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerDied","Data":"e5ef45707963c0f527cc0d4ee054d8833a51d1de21db7a9fb901df4f22501328"} Jan 27 16:01:43 crc kubenswrapper[4729]: I0127 16:01:43.604390 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerStarted","Data":"0d80580900e6bbf047e1319cbdef58b7d3fc719c62730749bc885fe6fcd0b92e"} Jan 27 16:01:43 crc kubenswrapper[4729]: I0127 16:01:43.630533 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wc72q" podStartSLOduration=2.040346524 podStartE2EDuration="11.630506735s" podCreationTimestamp="2026-01-27 16:01:32 +0000 UTC" firstStartedPulling="2026-01-27 16:01:33.484463793 +0000 UTC m=+6980.068654797" lastFinishedPulling="2026-01-27 16:01:43.074624004 +0000 UTC m=+6989.658815008" observedRunningTime="2026-01-27 16:01:43.623493839 +0000 UTC m=+6990.207684863" 
watchObservedRunningTime="2026-01-27 16:01:43.630506735 +0000 UTC m=+6990.214697749" Jan 27 16:01:49 crc kubenswrapper[4729]: I0127 16:01:49.730365 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-hvk8w_08c43db0-20ec-4a56-bd40-718173782b7b/nmstate-console-plugin/0.log" Jan 27 16:01:49 crc kubenswrapper[4729]: I0127 16:01:49.977935 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bbqst_15d3bff1-3e83-4e70-8118-ca0163c18e48/nmstate-handler/0.log" Jan 27 16:01:50 crc kubenswrapper[4729]: I0127 16:01:50.087996 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7cmw2_9ad69054-310c-4725-8991-d6bd0ace768d/kube-rbac-proxy/0.log" Jan 27 16:01:50 crc kubenswrapper[4729]: I0127 16:01:50.205155 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7cmw2_9ad69054-310c-4725-8991-d6bd0ace768d/nmstate-metrics/0.log" Jan 27 16:01:50 crc kubenswrapper[4729]: I0127 16:01:50.386561 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pg62x_5bbadca7-3b66-4264-8217-5f246163b41e/nmstate-operator/0.log" Jan 27 16:01:50 crc kubenswrapper[4729]: I0127 16:01:50.404106 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t7h46_7b15636d-99af-4c7d-80a8-b179de0709d7/nmstate-webhook/0.log" Jan 27 16:01:52 crc kubenswrapper[4729]: I0127 16:01:52.370095 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:52 crc kubenswrapper[4729]: I0127 16:01:52.370441 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:01:52 crc kubenswrapper[4729]: I0127 16:01:52.656182 4729 patch_prober.go:28] interesting 
pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:01:52 crc kubenswrapper[4729]: I0127 16:01:52.656668 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:01:53 crc kubenswrapper[4729]: I0127 16:01:53.419059 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wc72q" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" probeResult="failure" output=< Jan 27 16:01:53 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:01:53 crc kubenswrapper[4729]: > Jan 27 16:02:03 crc kubenswrapper[4729]: I0127 16:02:03.424261 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wc72q" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" probeResult="failure" output=< Jan 27 16:02:03 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:02:03 crc kubenswrapper[4729]: > Jan 27 16:02:05 crc kubenswrapper[4729]: I0127 16:02:05.675838 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/kube-rbac-proxy/0.log" Jan 27 16:02:05 crc kubenswrapper[4729]: I0127 16:02:05.801699 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/manager/0.log" 
Jan 27 16:02:13 crc kubenswrapper[4729]: I0127 16:02:13.425397 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wc72q" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" probeResult="failure" output=< Jan 27 16:02:13 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:02:13 crc kubenswrapper[4729]: > Jan 27 16:02:19 crc kubenswrapper[4729]: I0127 16:02:19.088531 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jn5b_e5a4281d-dad0-47ba-b48c-cb8a18c57552/prometheus-operator/0.log" Jan 27 16:02:19 crc kubenswrapper[4729]: I0127 16:02:19.315029 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_009d21ee-b5c2-4d71-8a58-fc2643442532/prometheus-operator-admission-webhook/0.log" Jan 27 16:02:19 crc kubenswrapper[4729]: I0127 16:02:19.384341 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_64ad3df0-d3a7-446f-a7d9-6c4194d92071/prometheus-operator-admission-webhook/0.log" Jan 27 16:02:19 crc kubenswrapper[4729]: I0127 16:02:19.576924 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-gcmgr_5b2e021c-d93d-45b1-81be-040aa9ab8ada/operator/0.log" Jan 27 16:02:19 crc kubenswrapper[4729]: I0127 16:02:19.640270 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m2dsv_be65005b-48eb-45fe-b1e7-f5b5416fd8f3/observability-ui-dashboards/0.log" Jan 27 16:02:19 crc kubenswrapper[4729]: I0127 16:02:19.818257 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p5mb2_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31/perses-operator/0.log" Jan 27 
16:02:22 crc kubenswrapper[4729]: I0127 16:02:22.654987 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:02:22 crc kubenswrapper[4729]: I0127 16:02:22.655339 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:02:23 crc kubenswrapper[4729]: I0127 16:02:23.422012 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wc72q" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" probeResult="failure" output=< Jan 27 16:02:23 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:02:23 crc kubenswrapper[4729]: > Jan 27 16:02:32 crc kubenswrapper[4729]: I0127 16:02:32.425377 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:02:32 crc kubenswrapper[4729]: I0127 16:02:32.480821 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:02:33 crc kubenswrapper[4729]: I0127 16:02:33.314518 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wc72q"] Jan 27 16:02:34 crc kubenswrapper[4729]: I0127 16:02:34.149645 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wc72q" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" 
containerID="cri-o://0d80580900e6bbf047e1319cbdef58b7d3fc719c62730749bc885fe6fcd0b92e" gracePeriod=2 Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.156398 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-6vccq_1b55fd12-cb85-45bc-aad0-b2326d50aed1/cluster-logging-operator/0.log" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.170112 4729 generic.go:334] "Generic (PLEG): container finished" podID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerID="0d80580900e6bbf047e1319cbdef58b7d3fc719c62730749bc885fe6fcd0b92e" exitCode=0 Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.170185 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerDied","Data":"0d80580900e6bbf047e1319cbdef58b7d3fc719c62730749bc885fe6fcd0b92e"} Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.170218 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wc72q" event={"ID":"7b01f55f-44dc-48c2-946a-54248a68a2da","Type":"ContainerDied","Data":"13629c48eb2ec8426ba719b25eeace1b0c38769fd7c54fc02926f2e10af96480"} Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.170235 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13629c48eb2ec8426ba719b25eeace1b0c38769fd7c54fc02926f2e10af96480" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.181170 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.217252 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-utilities\") pod \"7b01f55f-44dc-48c2-946a-54248a68a2da\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.217439 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-catalog-content\") pod \"7b01f55f-44dc-48c2-946a-54248a68a2da\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.217490 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fv52\" (UniqueName: \"kubernetes.io/projected/7b01f55f-44dc-48c2-946a-54248a68a2da-kube-api-access-4fv52\") pod \"7b01f55f-44dc-48c2-946a-54248a68a2da\" (UID: \"7b01f55f-44dc-48c2-946a-54248a68a2da\") " Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.230282 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-utilities" (OuterVolumeSpecName: "utilities") pod "7b01f55f-44dc-48c2-946a-54248a68a2da" (UID: "7b01f55f-44dc-48c2-946a-54248a68a2da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.244164 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b01f55f-44dc-48c2-946a-54248a68a2da-kube-api-access-4fv52" (OuterVolumeSpecName: "kube-api-access-4fv52") pod "7b01f55f-44dc-48c2-946a-54248a68a2da" (UID: "7b01f55f-44dc-48c2-946a-54248a68a2da"). InnerVolumeSpecName "kube-api-access-4fv52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.321057 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.321105 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fv52\" (UniqueName: \"kubernetes.io/projected/7b01f55f-44dc-48c2-946a-54248a68a2da-kube-api-access-4fv52\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.393625 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b01f55f-44dc-48c2-946a-54248a68a2da" (UID: "7b01f55f-44dc-48c2-946a-54248a68a2da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.422785 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b01f55f-44dc-48c2-946a-54248a68a2da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.477687 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-p56mj_6bf88053-f822-4735-b8df-cfd1622aad97/collector/0.log" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.554695 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_cc44c481-9e30-42f7-883b-209184e04fba/loki-compactor/0.log" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.698000 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-c62w8_c05d5a86-89ad-486f-b7dd-404906e2ae3b/loki-distributor/0.log" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.800142 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-hds4m_0c13a35c-2b09-4ffa-a6e5-10ba4311f962/gateway/0.log" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.866931 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-hds4m_0c13a35c-2b09-4ffa-a6e5-10ba4311f962/opa/0.log" Jan 27 16:02:35 crc kubenswrapper[4729]: I0127 16:02:35.900738 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-jf45j_f7d912e8-1da3-439c-9e59-66145d48e35c/gateway/0.log" Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.060710 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-jf45j_f7d912e8-1da3-439c-9e59-66145d48e35c/opa/0.log" Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.129968 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_cb7f1542-ef3d-4033-9345-6c504620a57e/loki-index-gateway/0.log" Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.178559 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wc72q" Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.211456 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wc72q"] Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.225864 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wc72q"] Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.348532 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_7f768b2c-e000-4052-9e92-82a3bde68514/loki-ingester/0.log" Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.362995 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-jk5rc_c529bcb3-c119-47c9-8311-53d2c13f5ddb/loki-querier/0.log" Jan 27 16:02:36 crc kubenswrapper[4729]: I0127 16:02:36.516010 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-g8jsr_0c35c4d5-cfb1-4d36-b502-5a9102ac0886/loki-query-frontend/0.log" Jan 27 16:02:38 crc kubenswrapper[4729]: I0127 16:02:38.066990 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" path="/var/lib/kubelet/pods/7b01f55f-44dc-48c2-946a-54248a68a2da/volumes" Jan 27 16:02:50 crc kubenswrapper[4729]: I0127 16:02:50.538334 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wdllv_c39bec12-16a0-40f8-996b-ca212fedccc2/kube-rbac-proxy/0.log" Jan 27 16:02:50 crc kubenswrapper[4729]: I0127 16:02:50.670071 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wdllv_c39bec12-16a0-40f8-996b-ca212fedccc2/controller/0.log" Jan 27 16:02:50 crc kubenswrapper[4729]: I0127 16:02:50.773364 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:02:50 crc kubenswrapper[4729]: I0127 16:02:50.911862 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:02:50 crc kubenswrapper[4729]: I0127 16:02:50.964670 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:02:50 crc kubenswrapper[4729]: I0127 16:02:50.977352 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.005766 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.152134 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.180396 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.195659 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.248660 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.452807 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.453468 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.463015 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.464298 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/controller/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.656397 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/frr-metrics/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.706153 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/kube-rbac-proxy-frr/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.722654 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/kube-rbac-proxy/0.log" Jan 27 16:02:51 crc kubenswrapper[4729]: I0127 16:02:51.928690 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/reloader/0.log" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.004365 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-f7nvd_cdef1951-7494-4265-9e1c-9098dab9c112/frr-k8s-webhook-server/0.log" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.201036 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58cc84db45-5jsdn_816e93a1-24ab-4c0b-acd4-439e95ae655d/manager/0.log" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.537001 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b46645688-26b9b_07916c16-27c4-4035-855c-f5ca61af09df/webhook-server/0.log" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.577761 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k2h86_2069295e-9cb7-458a-b4f6-4f569b6e6a8e/kube-rbac-proxy/0.log" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.654958 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.655018 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.655070 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.656028 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f5f41da375cb91c13dff4f50aab3dfe01a99162c052c53e3ac743bef2732a75"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 
27 16:02:52 crc kubenswrapper[4729]: I0127 16:02:52.656097 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://4f5f41da375cb91c13dff4f50aab3dfe01a99162c052c53e3ac743bef2732a75" gracePeriod=600 Jan 27 16:02:53 crc kubenswrapper[4729]: I0127 16:02:53.361698 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k2h86_2069295e-9cb7-458a-b4f6-4f569b6e6a8e/speaker/0.log" Jan 27 16:02:53 crc kubenswrapper[4729]: I0127 16:02:53.383773 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="4f5f41da375cb91c13dff4f50aab3dfe01a99162c052c53e3ac743bef2732a75" exitCode=0 Jan 27 16:02:53 crc kubenswrapper[4729]: I0127 16:02:53.383823 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"4f5f41da375cb91c13dff4f50aab3dfe01a99162c052c53e3ac743bef2732a75"} Jan 27 16:02:53 crc kubenswrapper[4729]: I0127 16:02:53.383855 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36"} Jan 27 16:02:53 crc kubenswrapper[4729]: I0127 16:02:53.383891 4729 scope.go:117] "RemoveContainer" containerID="3351345762d5e356811423b7095b391bde3174e854a335d2a88cb620ef0d6771" Jan 27 16:02:53 crc kubenswrapper[4729]: I0127 16:02:53.874430 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/frr/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.185351 4729 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/util/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.434329 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/util/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.442951 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/pull/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.477922 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/pull/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.697820 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/pull/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.725041 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/util/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.726797 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/extract/0.log" Jan 27 16:03:06 crc kubenswrapper[4729]: I0127 16:03:06.882512 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/util/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.068418 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/pull/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.110637 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/util/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.118403 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/pull/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.242837 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/pull/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.294435 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/util/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.337388 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/extract/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.495665 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/util/0.log" Jan 27 
16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.689864 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/pull/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.704723 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/util/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.715739 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/pull/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.912106 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/util/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.921116 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/pull/0.log" Jan 27 16:03:07 crc kubenswrapper[4729]: I0127 16:03:07.947293 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/extract/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.108462 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/util/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.322425 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/util/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.342581 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/pull/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.357415 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/pull/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.591230 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/pull/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.610160 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/util/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.645973 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/extract/0.log" Jan 27 16:03:08 crc kubenswrapper[4729]: I0127 16:03:08.805280 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/util/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.062289 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/util/0.log" Jan 27 
16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.066052 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/pull/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.079580 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/pull/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.466592 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/util/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.475612 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/pull/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.513897 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/extract/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.712285 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-utilities/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.882070 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-content/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.899723 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-utilities/0.log" Jan 27 16:03:09 crc kubenswrapper[4729]: I0127 16:03:09.921096 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-content/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.132130 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-utilities/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.149137 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-content/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.465162 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-utilities/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.687387 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-content/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.734488 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-utilities/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.781775 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/registry-server/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.813123 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-content/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.945564 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-utilities/0.log" Jan 27 16:03:10 crc kubenswrapper[4729]: I0127 16:03:10.945709 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-content/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.062477 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c26vz_cf7bbeaf-d788-4a89-94f5-af01034515c5/marketplace-operator/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.187658 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-utilities/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.459647 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-content/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.473197 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-content/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.504216 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-utilities/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.781536 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-utilities/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.795684 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-content/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.898409 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/registry-server/0.log" Jan 27 16:03:11 crc kubenswrapper[4729]: I0127 16:03:11.984217 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-utilities/0.log" Jan 27 16:03:12 crc kubenswrapper[4729]: I0127 16:03:12.077401 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/registry-server/0.log" Jan 27 16:03:12 crc kubenswrapper[4729]: I0127 16:03:12.172278 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-utilities/0.log" Jan 27 16:03:12 crc kubenswrapper[4729]: I0127 16:03:12.205600 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-content/0.log" Jan 27 16:03:12 crc kubenswrapper[4729]: I0127 16:03:12.225209 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-content/0.log" Jan 27 16:03:12 crc kubenswrapper[4729]: I0127 16:03:12.377858 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-content/0.log" Jan 
27 16:03:12 crc kubenswrapper[4729]: I0127 16:03:12.421612 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-utilities/0.log" Jan 27 16:03:13 crc kubenswrapper[4729]: I0127 16:03:13.379033 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/registry-server/0.log" Jan 27 16:03:25 crc kubenswrapper[4729]: I0127 16:03:25.085657 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jn5b_e5a4281d-dad0-47ba-b48c-cb8a18c57552/prometheus-operator/0.log" Jan 27 16:03:25 crc kubenswrapper[4729]: I0127 16:03:25.174611 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_64ad3df0-d3a7-446f-a7d9-6c4194d92071/prometheus-operator-admission-webhook/0.log" Jan 27 16:03:25 crc kubenswrapper[4729]: I0127 16:03:25.187788 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_009d21ee-b5c2-4d71-8a58-fc2643442532/prometheus-operator-admission-webhook/0.log" Jan 27 16:03:25 crc kubenswrapper[4729]: I0127 16:03:25.368953 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-gcmgr_5b2e021c-d93d-45b1-81be-040aa9ab8ada/operator/0.log" Jan 27 16:03:25 crc kubenswrapper[4729]: I0127 16:03:25.383275 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p5mb2_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31/perses-operator/0.log" Jan 27 16:03:25 crc kubenswrapper[4729]: I0127 16:03:25.393898 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m2dsv_be65005b-48eb-45fe-b1e7-f5b5416fd8f3/observability-ui-dashboards/0.log" Jan 27 16:03:38 crc kubenswrapper[4729]: I0127 16:03:38.406551 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/kube-rbac-proxy/0.log" Jan 27 16:03:38 crc kubenswrapper[4729]: I0127 16:03:38.460244 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/manager/0.log" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.398427 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mrm9q"] Jan 27 16:03:54 crc kubenswrapper[4729]: E0127 16:03:54.399544 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="extract-utilities" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.399560 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="extract-utilities" Jan 27 16:03:54 crc kubenswrapper[4729]: E0127 16:03:54.399571 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.399577 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" Jan 27 16:03:54 crc kubenswrapper[4729]: E0127 16:03:54.399604 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="extract-content" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.399613 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="extract-content" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.399894 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b01f55f-44dc-48c2-946a-54248a68a2da" containerName="registry-server" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.402249 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.415903 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrm9q"] Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.497906 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-catalog-content\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.498138 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hd9v\" (UniqueName: \"kubernetes.io/projected/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-kube-api-access-2hd9v\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.498353 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-utilities\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.601061 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2hd9v\" (UniqueName: \"kubernetes.io/projected/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-kube-api-access-2hd9v\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.601164 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-utilities\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.601412 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-catalog-content\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.601723 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-utilities\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.601781 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-catalog-content\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.629024 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2hd9v\" (UniqueName: \"kubernetes.io/projected/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-kube-api-access-2hd9v\") pod \"redhat-marketplace-mrm9q\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:54 crc kubenswrapper[4729]: I0127 16:03:54.756344 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:03:55 crc kubenswrapper[4729]: I0127 16:03:55.830824 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrm9q"] Jan 27 16:03:56 crc kubenswrapper[4729]: I0127 16:03:56.072225 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerStarted","Data":"0fafe8408c1865fbec9d4855d202df8811358b3edaf2cab2bff06ee4d531defa"} Jan 27 16:03:57 crc kubenswrapper[4729]: I0127 16:03:57.085917 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerID="7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb" exitCode=0 Jan 27 16:03:57 crc kubenswrapper[4729]: I0127 16:03:57.085983 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerDied","Data":"7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb"} Jan 27 16:03:57 crc kubenswrapper[4729]: I0127 16:03:57.089119 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:03:59 crc kubenswrapper[4729]: I0127 16:03:59.115583 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" 
event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerStarted","Data":"8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a"} Jan 27 16:04:01 crc kubenswrapper[4729]: I0127 16:04:01.142031 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerID="8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a" exitCode=0 Jan 27 16:04:01 crc kubenswrapper[4729]: I0127 16:04:01.142107 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerDied","Data":"8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a"} Jan 27 16:04:02 crc kubenswrapper[4729]: I0127 16:04:02.161511 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerStarted","Data":"6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19"} Jan 27 16:04:02 crc kubenswrapper[4729]: I0127 16:04:02.206606 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mrm9q" podStartSLOduration=3.502978997 podStartE2EDuration="8.206585537s" podCreationTimestamp="2026-01-27 16:03:54 +0000 UTC" firstStartedPulling="2026-01-27 16:03:57.088467627 +0000 UTC m=+7123.672658631" lastFinishedPulling="2026-01-27 16:04:01.792074157 +0000 UTC m=+7128.376265171" observedRunningTime="2026-01-27 16:04:02.183630084 +0000 UTC m=+7128.767821098" watchObservedRunningTime="2026-01-27 16:04:02.206585537 +0000 UTC m=+7128.790776541" Jan 27 16:04:04 crc kubenswrapper[4729]: I0127 16:04:04.757012 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:04:04 crc kubenswrapper[4729]: I0127 16:04:04.757682 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:04:05 crc kubenswrapper[4729]: I0127 16:04:05.830831 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mrm9q" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="registry-server" probeResult="failure" output=< Jan 27 16:04:05 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:04:05 crc kubenswrapper[4729]: > Jan 27 16:04:15 crc kubenswrapper[4729]: I0127 16:04:15.814083 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mrm9q" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="registry-server" probeResult="failure" output=< Jan 27 16:04:15 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:04:15 crc kubenswrapper[4729]: > Jan 27 16:04:17 crc kubenswrapper[4729]: I0127 16:04:17.046299 4729 scope.go:117] "RemoveContainer" containerID="2e606a8947a9502e548cc166e6cd660647da6b179be43b592ab4fdda32a7c0d8" Jan 27 16:04:24 crc kubenswrapper[4729]: I0127 16:04:24.807666 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:04:24 crc kubenswrapper[4729]: I0127 16:04:24.864819 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:04:25 crc kubenswrapper[4729]: I0127 16:04:25.597122 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrm9q"] Jan 27 16:04:26 crc kubenswrapper[4729]: I0127 16:04:26.452578 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mrm9q" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="registry-server" containerID="cri-o://6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19" 
gracePeriod=2 Jan 27 16:04:26 crc kubenswrapper[4729]: E0127 16:04:26.780352 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f517ad_35c1_4d4a_af7d_290a63e2dd01.slice/crio-6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f517ad_35c1_4d4a_af7d_290a63e2dd01.slice/crio-conmon-6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19.scope\": RecentStats: unable to find data in memory cache]" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.132162 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.249902 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hd9v\" (UniqueName: \"kubernetes.io/projected/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-kube-api-access-2hd9v\") pod \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.250032 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-catalog-content\") pod \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.250133 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-utilities\") pod \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\" (UID: \"a9f517ad-35c1-4d4a-af7d-290a63e2dd01\") " Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 
16:04:27.256187 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-utilities" (OuterVolumeSpecName: "utilities") pod "a9f517ad-35c1-4d4a-af7d-290a63e2dd01" (UID: "a9f517ad-35c1-4d4a-af7d-290a63e2dd01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.283548 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-kube-api-access-2hd9v" (OuterVolumeSpecName: "kube-api-access-2hd9v") pod "a9f517ad-35c1-4d4a-af7d-290a63e2dd01" (UID: "a9f517ad-35c1-4d4a-af7d-290a63e2dd01"). InnerVolumeSpecName "kube-api-access-2hd9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.334057 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9f517ad-35c1-4d4a-af7d-290a63e2dd01" (UID: "a9f517ad-35c1-4d4a-af7d-290a63e2dd01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.353669 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hd9v\" (UniqueName: \"kubernetes.io/projected/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-kube-api-access-2hd9v\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.353707 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.353716 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f517ad-35c1-4d4a-af7d-290a63e2dd01-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.465381 4729 generic.go:334] "Generic (PLEG): container finished" podID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerID="6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19" exitCode=0 Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.465421 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerDied","Data":"6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19"} Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.465457 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mrm9q" event={"ID":"a9f517ad-35c1-4d4a-af7d-290a63e2dd01","Type":"ContainerDied","Data":"0fafe8408c1865fbec9d4855d202df8811358b3edaf2cab2bff06ee4d531defa"} Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.465481 4729 scope.go:117] "RemoveContainer" containerID="6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 
16:04:27.465694 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mrm9q" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.516051 4729 scope.go:117] "RemoveContainer" containerID="8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.528239 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrm9q"] Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.541430 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mrm9q"] Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.554026 4729 scope.go:117] "RemoveContainer" containerID="7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.600849 4729 scope.go:117] "RemoveContainer" containerID="6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19" Jan 27 16:04:27 crc kubenswrapper[4729]: E0127 16:04:27.605777 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19\": container with ID starting with 6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19 not found: ID does not exist" containerID="6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.605846 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19"} err="failed to get container status \"6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19\": rpc error: code = NotFound desc = could not find container \"6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19\": container with ID starting with 
6c1e5291bcaa6a1abe8158a4b6a1ae68bd50c3b6c9c2fa337eb75b8712fbdf19 not found: ID does not exist" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.605890 4729 scope.go:117] "RemoveContainer" containerID="8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a" Jan 27 16:04:27 crc kubenswrapper[4729]: E0127 16:04:27.606342 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a\": container with ID starting with 8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a not found: ID does not exist" containerID="8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.606392 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a"} err="failed to get container status \"8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a\": rpc error: code = NotFound desc = could not find container \"8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a\": container with ID starting with 8c476032f0536c678cc048a4b44be1356439d1268737baa4564af9ac864b762a not found: ID does not exist" Jan 27 16:04:27 crc kubenswrapper[4729]: I0127 16:04:27.610393 4729 scope.go:117] "RemoveContainer" containerID="7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb" Jan 27 16:04:27 crc kubenswrapper[4729]: E0127 16:04:27.610834 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb\": container with ID starting with 7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb not found: ID does not exist" containerID="7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb" Jan 27 16:04:27 crc 
kubenswrapper[4729]: I0127 16:04:27.610883 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb"} err="failed to get container status \"7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb\": rpc error: code = NotFound desc = could not find container \"7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb\": container with ID starting with 7955c5570a9b8a4bf0345dd39f170b10ca2015700c0059e0e8305cb41acc87fb not found: ID does not exist" Jan 27 16:04:28 crc kubenswrapper[4729]: I0127 16:04:28.065769 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" path="/var/lib/kubelet/pods/a9f517ad-35c1-4d4a-af7d-290a63e2dd01/volumes" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.006568 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrz78"] Jan 27 16:04:33 crc kubenswrapper[4729]: E0127 16:04:33.007614 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="extract-utilities" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.007629 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="extract-utilities" Jan 27 16:04:33 crc kubenswrapper[4729]: E0127 16:04:33.007637 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="registry-server" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.007643 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="registry-server" Jan 27 16:04:33 crc kubenswrapper[4729]: E0127 16:04:33.007658 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="extract-content" Jan 
27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.007664 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="extract-content" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.007914 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f517ad-35c1-4d4a-af7d-290a63e2dd01" containerName="registry-server" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.009603 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.037120 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrz78"] Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.098557 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-catalog-content\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.098704 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk8m9\" (UniqueName: \"kubernetes.io/projected/4782888f-4c2d-449c-aec3-bcfbe798a6d2-kube-api-access-sk8m9\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.099510 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-utilities\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " 
pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.203296 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-catalog-content\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.203807 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk8m9\" (UniqueName: \"kubernetes.io/projected/4782888f-4c2d-449c-aec3-bcfbe798a6d2-kube-api-access-sk8m9\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.203815 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-catalog-content\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.203981 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-utilities\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.204305 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-utilities\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " 
pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.231370 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk8m9\" (UniqueName: \"kubernetes.io/projected/4782888f-4c2d-449c-aec3-bcfbe798a6d2-kube-api-access-sk8m9\") pod \"certified-operators-lrz78\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:33 crc kubenswrapper[4729]: I0127 16:04:33.329321 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:34 crc kubenswrapper[4729]: I0127 16:04:34.013365 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrz78"] Jan 27 16:04:34 crc kubenswrapper[4729]: I0127 16:04:34.601854 4729 generic.go:334] "Generic (PLEG): container finished" podID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerID="c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec" exitCode=0 Jan 27 16:04:34 crc kubenswrapper[4729]: I0127 16:04:34.601931 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz78" event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerDied","Data":"c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec"} Jan 27 16:04:34 crc kubenswrapper[4729]: I0127 16:04:34.602220 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz78" event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerStarted","Data":"0f6675d3b191c2c53224fccdf055dc83c17ccffb0f6121f4ff4dfc3f56015685"} Jan 27 16:04:37 crc kubenswrapper[4729]: I0127 16:04:37.633019 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz78" 
event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerStarted","Data":"0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c"} Jan 27 16:04:39 crc kubenswrapper[4729]: I0127 16:04:39.673253 4729 generic.go:334] "Generic (PLEG): container finished" podID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerID="0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c" exitCode=0 Jan 27 16:04:39 crc kubenswrapper[4729]: I0127 16:04:39.673814 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz78" event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerDied","Data":"0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c"} Jan 27 16:04:41 crc kubenswrapper[4729]: I0127 16:04:41.696556 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz78" event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerStarted","Data":"4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296"} Jan 27 16:04:41 crc kubenswrapper[4729]: I0127 16:04:41.722515 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrz78" podStartSLOduration=3.059936694 podStartE2EDuration="9.722491279s" podCreationTimestamp="2026-01-27 16:04:32 +0000 UTC" firstStartedPulling="2026-01-27 16:04:34.603810993 +0000 UTC m=+7161.188001997" lastFinishedPulling="2026-01-27 16:04:41.266365578 +0000 UTC m=+7167.850556582" observedRunningTime="2026-01-27 16:04:41.715928683 +0000 UTC m=+7168.300119677" watchObservedRunningTime="2026-01-27 16:04:41.722491279 +0000 UTC m=+7168.306682313" Jan 27 16:04:43 crc kubenswrapper[4729]: I0127 16:04:43.329555 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:43 crc kubenswrapper[4729]: I0127 16:04:43.330189 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:43 crc kubenswrapper[4729]: I0127 16:04:43.383751 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:53 crc kubenswrapper[4729]: I0127 16:04:53.380630 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:57 crc kubenswrapper[4729]: I0127 16:04:57.794899 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrz78"] Jan 27 16:04:57 crc kubenswrapper[4729]: I0127 16:04:57.795602 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrz78" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="registry-server" containerID="cri-o://4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296" gracePeriod=2 Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.370977 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.479884 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk8m9\" (UniqueName: \"kubernetes.io/projected/4782888f-4c2d-449c-aec3-bcfbe798a6d2-kube-api-access-sk8m9\") pod \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.479990 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-utilities\") pod \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.480023 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-catalog-content\") pod \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\" (UID: \"4782888f-4c2d-449c-aec3-bcfbe798a6d2\") " Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.481607 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-utilities" (OuterVolumeSpecName: "utilities") pod "4782888f-4c2d-449c-aec3-bcfbe798a6d2" (UID: "4782888f-4c2d-449c-aec3-bcfbe798a6d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.500150 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4782888f-4c2d-449c-aec3-bcfbe798a6d2-kube-api-access-sk8m9" (OuterVolumeSpecName: "kube-api-access-sk8m9") pod "4782888f-4c2d-449c-aec3-bcfbe798a6d2" (UID: "4782888f-4c2d-449c-aec3-bcfbe798a6d2"). InnerVolumeSpecName "kube-api-access-sk8m9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.544649 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4782888f-4c2d-449c-aec3-bcfbe798a6d2" (UID: "4782888f-4c2d-449c-aec3-bcfbe798a6d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.582832 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.582914 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4782888f-4c2d-449c-aec3-bcfbe798a6d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.582928 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk8m9\" (UniqueName: \"kubernetes.io/projected/4782888f-4c2d-449c-aec3-bcfbe798a6d2-kube-api-access-sk8m9\") on node \"crc\" DevicePath \"\"" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.893312 4729 generic.go:334] "Generic (PLEG): container finished" podID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerID="4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296" exitCode=0 Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.893705 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrz78" event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerDied","Data":"4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296"} Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.893748 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-lrz78" event={"ID":"4782888f-4c2d-449c-aec3-bcfbe798a6d2","Type":"ContainerDied","Data":"0f6675d3b191c2c53224fccdf055dc83c17ccffb0f6121f4ff4dfc3f56015685"} Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.893794 4729 scope.go:117] "RemoveContainer" containerID="4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.893847 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrz78" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.945584 4729 scope.go:117] "RemoveContainer" containerID="0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c" Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.960004 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrz78"] Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.970585 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrz78"] Jan 27 16:04:58 crc kubenswrapper[4729]: I0127 16:04:58.977667 4729 scope.go:117] "RemoveContainer" containerID="c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec" Jan 27 16:04:59 crc kubenswrapper[4729]: I0127 16:04:59.035833 4729 scope.go:117] "RemoveContainer" containerID="4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296" Jan 27 16:04:59 crc kubenswrapper[4729]: E0127 16:04:59.036415 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296\": container with ID starting with 4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296 not found: ID does not exist" containerID="4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296" Jan 27 16:04:59 crc kubenswrapper[4729]: I0127 
16:04:59.036475 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296"} err="failed to get container status \"4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296\": rpc error: code = NotFound desc = could not find container \"4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296\": container with ID starting with 4a843263da4f710f28f91ec91553aa550e1fc25f583e16f0872f1e7c25c66296 not found: ID does not exist" Jan 27 16:04:59 crc kubenswrapper[4729]: I0127 16:04:59.036510 4729 scope.go:117] "RemoveContainer" containerID="0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c" Jan 27 16:04:59 crc kubenswrapper[4729]: E0127 16:04:59.036958 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c\": container with ID starting with 0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c not found: ID does not exist" containerID="0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c" Jan 27 16:04:59 crc kubenswrapper[4729]: I0127 16:04:59.037005 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c"} err="failed to get container status \"0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c\": rpc error: code = NotFound desc = could not find container \"0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c\": container with ID starting with 0ae238de6906e5a45db152a3034db770fb380e843ee89c708e70d8d51ced9b5c not found: ID does not exist" Jan 27 16:04:59 crc kubenswrapper[4729]: I0127 16:04:59.037034 4729 scope.go:117] "RemoveContainer" containerID="c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec" Jan 27 16:04:59 crc 
kubenswrapper[4729]: E0127 16:04:59.037634 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec\": container with ID starting with c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec not found: ID does not exist" containerID="c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec" Jan 27 16:04:59 crc kubenswrapper[4729]: I0127 16:04:59.037666 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec"} err="failed to get container status \"c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec\": rpc error: code = NotFound desc = could not find container \"c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec\": container with ID starting with c7632131bbb528cffc21afb7b3f16d5bda0c5940185cefcaeeebae61e8f27aec not found: ID does not exist" Jan 27 16:05:00 crc kubenswrapper[4729]: I0127 16:05:00.065332 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" path="/var/lib/kubelet/pods/4782888f-4c2d-449c-aec3-bcfbe798a6d2/volumes" Jan 27 16:05:17 crc kubenswrapper[4729]: I0127 16:05:17.270384 4729 scope.go:117] "RemoveContainer" containerID="04347b3ad967887b8398f8dd042c3acc42044c6b0e1d029e2583bb00d3f0bfb4" Jan 27 16:05:22 crc kubenswrapper[4729]: I0127 16:05:22.655134 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:05:22 crc kubenswrapper[4729]: I0127 16:05:22.655817 4729 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:05:52 crc kubenswrapper[4729]: I0127 16:05:52.654652 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:05:52 crc kubenswrapper[4729]: I0127 16:05:52.655195 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:06:00 crc kubenswrapper[4729]: I0127 16:06:00.733690 4729 generic.go:334] "Generic (PLEG): container finished" podID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerID="37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719" exitCode=0 Jan 27 16:06:00 crc kubenswrapper[4729]: I0127 16:06:00.733777 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7js5l/must-gather-df56w" event={"ID":"a4c065cc-3ae3-4d70-ba51-4888709048c9","Type":"ContainerDied","Data":"37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719"} Jan 27 16:06:00 crc kubenswrapper[4729]: I0127 16:06:00.736061 4729 scope.go:117] "RemoveContainer" containerID="37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719" Jan 27 16:06:00 crc kubenswrapper[4729]: I0127 16:06:00.849001 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7js5l_must-gather-df56w_a4c065cc-3ae3-4d70-ba51-4888709048c9/gather/0.log" 
Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.094242 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-7js5l/must-gather-df56w"] Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.096050 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-7js5l/must-gather-df56w" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="copy" containerID="cri-o://1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4" gracePeriod=2 Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.108474 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-7js5l/must-gather-df56w"] Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.621611 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7js5l_must-gather-df56w_a4c065cc-3ae3-4d70-ba51-4888709048c9/copy/0.log" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.622665 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.775573 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4c065cc-3ae3-4d70-ba51-4888709048c9-must-gather-output\") pod \"a4c065cc-3ae3-4d70-ba51-4888709048c9\" (UID: \"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.776025 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j55bf\" (UniqueName: \"kubernetes.io/projected/a4c065cc-3ae3-4d70-ba51-4888709048c9-kube-api-access-j55bf\") pod \"a4c065cc-3ae3-4d70-ba51-4888709048c9\" (UID: \"a4c065cc-3ae3-4d70-ba51-4888709048c9\") " Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.783654 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c065cc-3ae3-4d70-ba51-4888709048c9-kube-api-access-j55bf" (OuterVolumeSpecName: "kube-api-access-j55bf") pod "a4c065cc-3ae3-4d70-ba51-4888709048c9" (UID: "a4c065cc-3ae3-4d70-ba51-4888709048c9"). InnerVolumeSpecName "kube-api-access-j55bf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.879092 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j55bf\" (UniqueName: \"kubernetes.io/projected/a4c065cc-3ae3-4d70-ba51-4888709048c9-kube-api-access-j55bf\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.880984 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-7js5l_must-gather-df56w_a4c065cc-3ae3-4d70-ba51-4888709048c9/copy/0.log" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.881514 4729 generic.go:334] "Generic (PLEG): container finished" podID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerID="1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4" exitCode=143 Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.881581 4729 scope.go:117] "RemoveContainer" containerID="1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.881830 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7js5l/must-gather-df56w" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.908537 4729 scope.go:117] "RemoveContainer" containerID="37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.955063 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c065cc-3ae3-4d70-ba51-4888709048c9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a4c065cc-3ae3-4d70-ba51-4888709048c9" (UID: "a4c065cc-3ae3-4d70-ba51-4888709048c9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:06:12 crc kubenswrapper[4729]: I0127 16:06:12.981254 4729 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a4c065cc-3ae3-4d70-ba51-4888709048c9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 16:06:13 crc kubenswrapper[4729]: I0127 16:06:13.091743 4729 scope.go:117] "RemoveContainer" containerID="1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4" Jan 27 16:06:13 crc kubenswrapper[4729]: E0127 16:06:13.092361 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4\": container with ID starting with 1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4 not found: ID does not exist" containerID="1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4" Jan 27 16:06:13 crc kubenswrapper[4729]: I0127 16:06:13.092415 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4"} err="failed to get container status \"1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4\": rpc error: code = NotFound desc = could not find container \"1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4\": container with ID starting with 1b7b799ab82dc5e0da1affcfef49e3ae2363929799a6432fa6e9f2e7fac319d4 not found: ID does not exist" Jan 27 16:06:13 crc kubenswrapper[4729]: I0127 16:06:13.092448 4729 scope.go:117] "RemoveContainer" containerID="37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719" Jan 27 16:06:13 crc kubenswrapper[4729]: E0127 16:06:13.092923 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719\": container with ID starting with 37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719 not found: ID does not exist" containerID="37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719" Jan 27 16:06:13 crc kubenswrapper[4729]: I0127 16:06:13.092975 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719"} err="failed to get container status \"37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719\": rpc error: code = NotFound desc = could not find container \"37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719\": container with ID starting with 37f65e2f1c0701467f2701bdf4fa873ac4182a4343d3a83ae9f628418fe70719 not found: ID does not exist" Jan 27 16:06:14 crc kubenswrapper[4729]: I0127 16:06:14.065697 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" path="/var/lib/kubelet/pods/a4c065cc-3ae3-4d70-ba51-4888709048c9/volumes" Jan 27 16:06:22 crc kubenswrapper[4729]: I0127 16:06:22.655262 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:06:22 crc kubenswrapper[4729]: I0127 16:06:22.655763 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:06:22 crc kubenswrapper[4729]: I0127 16:06:22.655808 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 16:06:22 crc kubenswrapper[4729]: I0127 16:06:22.656702 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:06:22 crc kubenswrapper[4729]: I0127 16:06:22.656764 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" gracePeriod=600 Jan 27 16:06:22 crc kubenswrapper[4729]: E0127 16:06:22.800777 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:06:23 crc kubenswrapper[4729]: I0127 16:06:23.005031 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" exitCode=0 Jan 27 16:06:23 crc kubenswrapper[4729]: I0127 16:06:23.005085 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36"} 
Jan 27 16:06:23 crc kubenswrapper[4729]: I0127 16:06:23.005129 4729 scope.go:117] "RemoveContainer" containerID="4f5f41da375cb91c13dff4f50aab3dfe01a99162c052c53e3ac743bef2732a75" Jan 27 16:06:23 crc kubenswrapper[4729]: I0127 16:06:23.006058 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:06:23 crc kubenswrapper[4729]: E0127 16:06:23.006432 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:06:35 crc kubenswrapper[4729]: I0127 16:06:35.051634 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:06:35 crc kubenswrapper[4729]: E0127 16:06:35.052583 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:06:50 crc kubenswrapper[4729]: I0127 16:06:50.052319 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:06:50 crc kubenswrapper[4729]: E0127 16:06:50.054290 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:07:04 crc kubenswrapper[4729]: I0127 16:07:04.061862 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:07:04 crc kubenswrapper[4729]: E0127 16:07:04.062843 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:07:19 crc kubenswrapper[4729]: I0127 16:07:19.052335 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:07:19 crc kubenswrapper[4729]: E0127 16:07:19.053490 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:07:32 crc kubenswrapper[4729]: I0127 16:07:32.051786 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:07:32 crc kubenswrapper[4729]: E0127 16:07:32.052665 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:07:41 crc kubenswrapper[4729]: I0127 16:07:41.626525 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-849d9cdd4f-w5qzz" podUID="39caa2da-8dac-4581-8e89-2b7f3b013b8c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.645902 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkvd8"] Jan 27 16:07:43 crc kubenswrapper[4729]: E0127 16:07:43.647084 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="extract-utilities" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647099 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="extract-utilities" Jan 27 16:07:43 crc kubenswrapper[4729]: E0127 16:07:43.647125 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="copy" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647135 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="copy" Jan 27 16:07:43 crc kubenswrapper[4729]: E0127 16:07:43.647145 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="extract-content" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647152 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="extract-content" Jan 27 16:07:43 crc kubenswrapper[4729]: E0127 16:07:43.647187 4729 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="registry-server" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647195 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="registry-server" Jan 27 16:07:43 crc kubenswrapper[4729]: E0127 16:07:43.647213 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="gather" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647219 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="gather" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647444 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="copy" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647457 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="4782888f-4c2d-449c-aec3-bcfbe798a6d2" containerName="registry-server" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.647477 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c065cc-3ae3-4d70-ba51-4888709048c9" containerName="gather" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.650178 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.665197 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkvd8"] Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.801688 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lm42\" (UniqueName: \"kubernetes.io/projected/d04adb1f-f78d-4723-b80c-30f5c2bed044-kube-api-access-4lm42\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.802237 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-catalog-content\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.802517 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-utilities\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.906357 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lm42\" (UniqueName: \"kubernetes.io/projected/d04adb1f-f78d-4723-b80c-30f5c2bed044-kube-api-access-4lm42\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.906571 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-catalog-content\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.906632 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-utilities\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.907157 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-catalog-content\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.907204 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-utilities\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.931102 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lm42\" (UniqueName: \"kubernetes.io/projected/d04adb1f-f78d-4723-b80c-30f5c2bed044-kube-api-access-4lm42\") pod \"community-operators-mkvd8\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:43 crc kubenswrapper[4729]: I0127 16:07:43.984751 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:44 crc kubenswrapper[4729]: I0127 16:07:44.662830 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkvd8"] Jan 27 16:07:44 crc kubenswrapper[4729]: I0127 16:07:44.882479 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerStarted","Data":"8d73782d801b857824c538411c9b645d944a0c0117e9cb064f6b92e299485edd"} Jan 27 16:07:45 crc kubenswrapper[4729]: I0127 16:07:45.895630 4729 generic.go:334] "Generic (PLEG): container finished" podID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerID="3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c" exitCode=0 Jan 27 16:07:45 crc kubenswrapper[4729]: I0127 16:07:45.895976 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerDied","Data":"3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c"} Jan 27 16:07:46 crc kubenswrapper[4729]: I0127 16:07:46.052212 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:07:46 crc kubenswrapper[4729]: E0127 16:07:46.052666 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:07:47 crc kubenswrapper[4729]: I0127 16:07:47.924930 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" 
event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerStarted","Data":"77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b"} Jan 27 16:07:49 crc kubenswrapper[4729]: I0127 16:07:49.947010 4729 generic.go:334] "Generic (PLEG): container finished" podID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerID="77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b" exitCode=0 Jan 27 16:07:49 crc kubenswrapper[4729]: I0127 16:07:49.947068 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerDied","Data":"77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b"} Jan 27 16:07:50 crc kubenswrapper[4729]: I0127 16:07:50.962064 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerStarted","Data":"f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7"} Jan 27 16:07:50 crc kubenswrapper[4729]: I0127 16:07:50.987669 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkvd8" podStartSLOduration=3.566907423 podStartE2EDuration="7.987647049s" podCreationTimestamp="2026-01-27 16:07:43 +0000 UTC" firstStartedPulling="2026-01-27 16:07:45.898555455 +0000 UTC m=+7352.482746459" lastFinishedPulling="2026-01-27 16:07:50.319295081 +0000 UTC m=+7356.903486085" observedRunningTime="2026-01-27 16:07:50.983327374 +0000 UTC m=+7357.567518378" watchObservedRunningTime="2026-01-27 16:07:50.987647049 +0000 UTC m=+7357.571838053" Jan 27 16:07:53 crc kubenswrapper[4729]: I0127 16:07:53.985449 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:53 crc kubenswrapper[4729]: I0127 16:07:53.986110 4729 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:07:55 crc kubenswrapper[4729]: I0127 16:07:55.034199 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-mkvd8" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="registry-server" probeResult="failure" output=< Jan 27 16:07:55 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:07:55 crc kubenswrapper[4729]: > Jan 27 16:08:01 crc kubenswrapper[4729]: I0127 16:08:01.051793 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:08:01 crc kubenswrapper[4729]: E0127 16:08:01.052824 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:08:04 crc kubenswrapper[4729]: I0127 16:08:04.073454 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:08:04 crc kubenswrapper[4729]: I0127 16:08:04.122871 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:08:04 crc kubenswrapper[4729]: I0127 16:08:04.311212 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkvd8"] Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.115983 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mkvd8" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="registry-server" 
containerID="cri-o://f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7" gracePeriod=2 Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.649489 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.746644 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-utilities\") pod \"d04adb1f-f78d-4723-b80c-30f5c2bed044\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.746744 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lm42\" (UniqueName: \"kubernetes.io/projected/d04adb1f-f78d-4723-b80c-30f5c2bed044-kube-api-access-4lm42\") pod \"d04adb1f-f78d-4723-b80c-30f5c2bed044\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.746987 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-catalog-content\") pod \"d04adb1f-f78d-4723-b80c-30f5c2bed044\" (UID: \"d04adb1f-f78d-4723-b80c-30f5c2bed044\") " Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.747323 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-utilities" (OuterVolumeSpecName: "utilities") pod "d04adb1f-f78d-4723-b80c-30f5c2bed044" (UID: "d04adb1f-f78d-4723-b80c-30f5c2bed044"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.748328 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.752584 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04adb1f-f78d-4723-b80c-30f5c2bed044-kube-api-access-4lm42" (OuterVolumeSpecName: "kube-api-access-4lm42") pod "d04adb1f-f78d-4723-b80c-30f5c2bed044" (UID: "d04adb1f-f78d-4723-b80c-30f5c2bed044"). InnerVolumeSpecName "kube-api-access-4lm42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.807136 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d04adb1f-f78d-4723-b80c-30f5c2bed044" (UID: "d04adb1f-f78d-4723-b80c-30f5c2bed044"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.850943 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04adb1f-f78d-4723-b80c-30f5c2bed044-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:05 crc kubenswrapper[4729]: I0127 16:08:05.850985 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lm42\" (UniqueName: \"kubernetes.io/projected/d04adb1f-f78d-4723-b80c-30f5c2bed044-kube-api-access-4lm42\") on node \"crc\" DevicePath \"\"" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.133028 4729 generic.go:334] "Generic (PLEG): container finished" podID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerID="f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7" exitCode=0 Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.133069 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerDied","Data":"f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7"} Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.133096 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkvd8" event={"ID":"d04adb1f-f78d-4723-b80c-30f5c2bed044","Type":"ContainerDied","Data":"8d73782d801b857824c538411c9b645d944a0c0117e9cb064f6b92e299485edd"} Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.133115 4729 scope.go:117] "RemoveContainer" containerID="f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.133194 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkvd8" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.163062 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkvd8"] Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.163431 4729 scope.go:117] "RemoveContainer" containerID="77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.174807 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mkvd8"] Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.193827 4729 scope.go:117] "RemoveContainer" containerID="3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.268677 4729 scope.go:117] "RemoveContainer" containerID="f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7" Jan 27 16:08:06 crc kubenswrapper[4729]: E0127 16:08:06.269668 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7\": container with ID starting with f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7 not found: ID does not exist" containerID="f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.269729 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7"} err="failed to get container status \"f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7\": rpc error: code = NotFound desc = could not find container \"f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7\": container with ID starting with f0eb1c9f46fef4bed3a5641aa1ac4fe5793e5bad82a7c2a4a7a89effc4e2cec7 not 
found: ID does not exist" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.269762 4729 scope.go:117] "RemoveContainer" containerID="77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b" Jan 27 16:08:06 crc kubenswrapper[4729]: E0127 16:08:06.270213 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b\": container with ID starting with 77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b not found: ID does not exist" containerID="77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.270258 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b"} err="failed to get container status \"77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b\": rpc error: code = NotFound desc = could not find container \"77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b\": container with ID starting with 77f96ce359e4d2a8d0aa17dd2530290ed5b5fa6ba414e388f1902df546f21e0b not found: ID does not exist" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.270288 4729 scope.go:117] "RemoveContainer" containerID="3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c" Jan 27 16:08:06 crc kubenswrapper[4729]: E0127 16:08:06.270663 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c\": container with ID starting with 3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c not found: ID does not exist" containerID="3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c" Jan 27 16:08:06 crc kubenswrapper[4729]: I0127 16:08:06.270705 4729 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c"} err="failed to get container status \"3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c\": rpc error: code = NotFound desc = could not find container \"3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c\": container with ID starting with 3ff125db5e0259e91c174930897aaf445a5480fabc1b74566811e0173511a07c not found: ID does not exist" Jan 27 16:08:08 crc kubenswrapper[4729]: I0127 16:08:08.064425 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" path="/var/lib/kubelet/pods/d04adb1f-f78d-4723-b80c-30f5c2bed044/volumes" Jan 27 16:08:15 crc kubenswrapper[4729]: I0127 16:08:15.051588 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:08:15 crc kubenswrapper[4729]: E0127 16:08:15.052349 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:08:17 crc kubenswrapper[4729]: I0127 16:08:17.429584 4729 scope.go:117] "RemoveContainer" containerID="e5ef45707963c0f527cc0d4ee054d8833a51d1de21db7a9fb901df4f22501328" Jan 27 16:08:17 crc kubenswrapper[4729]: I0127 16:08:17.456006 4729 scope.go:117] "RemoveContainer" containerID="5c81dc63e99a86f59f4eba4e778db4c0983ab828c64c3dd50ec42cbc3e390abd" Jan 27 16:08:17 crc kubenswrapper[4729]: I0127 16:08:17.514980 4729 scope.go:117] "RemoveContainer" containerID="0d80580900e6bbf047e1319cbdef58b7d3fc719c62730749bc885fe6fcd0b92e" Jan 27 16:08:30 
crc kubenswrapper[4729]: I0127 16:08:30.051575 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:08:30 crc kubenswrapper[4729]: E0127 16:08:30.052685 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:08:41 crc kubenswrapper[4729]: I0127 16:08:41.051514 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:08:41 crc kubenswrapper[4729]: E0127 16:08:41.052329 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:08:55 crc kubenswrapper[4729]: I0127 16:08:55.051631 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:08:55 crc kubenswrapper[4729]: E0127 16:08:55.052867 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" 
Jan 27 16:09:09 crc kubenswrapper[4729]: I0127 16:09:09.051229 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:09:09 crc kubenswrapper[4729]: E0127 16:09:09.052150 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:09:21 crc kubenswrapper[4729]: I0127 16:09:21.052131 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:09:21 crc kubenswrapper[4729]: E0127 16:09:21.053132 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:09:35 crc kubenswrapper[4729]: I0127 16:09:35.050705 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:09:35 crc kubenswrapper[4729]: E0127 16:09:35.051299 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.401789 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mjwht/must-gather-67bkt"] Jan 27 16:09:41 crc kubenswrapper[4729]: E0127 16:09:41.403013 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="extract-utilities" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.403033 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="extract-utilities" Jan 27 16:09:41 crc kubenswrapper[4729]: E0127 16:09:41.403086 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="registry-server" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.403095 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="registry-server" Jan 27 16:09:41 crc kubenswrapper[4729]: E0127 16:09:41.403131 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="extract-content" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.403139 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="extract-content" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.403449 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04adb1f-f78d-4723-b80c-30f5c2bed044" containerName="registry-server" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.406217 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.420425 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mjwht"/"kube-root-ca.crt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.420432 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mjwht"/"openshift-service-ca.crt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.446970 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mjwht/must-gather-67bkt"] Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.481836 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9c7r\" (UniqueName: \"kubernetes.io/projected/11e580c6-f2da-429f-894a-8d32d7ad242e-kube-api-access-r9c7r\") pod \"must-gather-67bkt\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") " pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.482023 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11e580c6-f2da-429f-894a-8d32d7ad242e-must-gather-output\") pod \"must-gather-67bkt\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") " pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.591415 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9c7r\" (UniqueName: \"kubernetes.io/projected/11e580c6-f2da-429f-894a-8d32d7ad242e-kube-api-access-r9c7r\") pod \"must-gather-67bkt\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") " pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.591703 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11e580c6-f2da-429f-894a-8d32d7ad242e-must-gather-output\") pod \"must-gather-67bkt\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") " pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.601802 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11e580c6-f2da-429f-894a-8d32d7ad242e-must-gather-output\") pod \"must-gather-67bkt\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") " pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.625333 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9c7r\" (UniqueName: \"kubernetes.io/projected/11e580c6-f2da-429f-894a-8d32d7ad242e-kube-api-access-r9c7r\") pod \"must-gather-67bkt\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") " pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:41 crc kubenswrapper[4729]: I0127 16:09:41.729806 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/must-gather-67bkt" Jan 27 16:09:42 crc kubenswrapper[4729]: I0127 16:09:42.233787 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mjwht/must-gather-67bkt"] Jan 27 16:09:42 crc kubenswrapper[4729]: W0127 16:09:42.244612 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11e580c6_f2da_429f_894a_8d32d7ad242e.slice/crio-5178bcd8667be902a4bb56b13c0f96871f268f4fddc252fa80d2be6b3145ce42 WatchSource:0}: Error finding container 5178bcd8667be902a4bb56b13c0f96871f268f4fddc252fa80d2be6b3145ce42: Status 404 returned error can't find the container with id 5178bcd8667be902a4bb56b13c0f96871f268f4fddc252fa80d2be6b3145ce42 Jan 27 16:09:43 crc kubenswrapper[4729]: I0127 16:09:43.246349 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/must-gather-67bkt" event={"ID":"11e580c6-f2da-429f-894a-8d32d7ad242e","Type":"ContainerStarted","Data":"7f3b8f08bdf831793a6597fbc77e9124fa6ad5802c485c33092e376f7b4d94f9"} Jan 27 16:09:43 crc kubenswrapper[4729]: I0127 16:09:43.246609 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/must-gather-67bkt" event={"ID":"11e580c6-f2da-429f-894a-8d32d7ad242e","Type":"ContainerStarted","Data":"f1cf291732ad48da49276807961aa9bc090584454dc0d2853e20bf76b4bef726"} Jan 27 16:09:43 crc kubenswrapper[4729]: I0127 16:09:43.246618 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/must-gather-67bkt" event={"ID":"11e580c6-f2da-429f-894a-8d32d7ad242e","Type":"ContainerStarted","Data":"5178bcd8667be902a4bb56b13c0f96871f268f4fddc252fa80d2be6b3145ce42"} Jan 27 16:09:43 crc kubenswrapper[4729]: I0127 16:09:43.274546 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mjwht/must-gather-67bkt" podStartSLOduration=2.274524118 
podStartE2EDuration="2.274524118s" podCreationTimestamp="2026-01-27 16:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:09:43.262505477 +0000 UTC m=+7469.846696481" watchObservedRunningTime="2026-01-27 16:09:43.274524118 +0000 UTC m=+7469.858715122" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:46.999613 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mjwht/crc-debug-kcpcn"] Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.002333 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.013990 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mjwht"/"default-dockercfg-lsw5z" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.038806 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfqdc\" (UniqueName: \"kubernetes.io/projected/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-kube-api-access-gfqdc\") pod \"crc-debug-kcpcn\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.039239 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-host\") pod \"crc-debug-kcpcn\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.051688 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:09:47 crc kubenswrapper[4729]: E0127 16:09:47.052012 4729 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.141044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfqdc\" (UniqueName: \"kubernetes.io/projected/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-kube-api-access-gfqdc\") pod \"crc-debug-kcpcn\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.141546 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-host\") pod \"crc-debug-kcpcn\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.143859 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-host\") pod \"crc-debug-kcpcn\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.162547 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfqdc\" (UniqueName: \"kubernetes.io/projected/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-kube-api-access-gfqdc\") pod \"crc-debug-kcpcn\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:47 crc kubenswrapper[4729]: I0127 16:09:47.347441 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:09:48 crc kubenswrapper[4729]: I0127 16:09:48.305429 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" event={"ID":"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3","Type":"ContainerStarted","Data":"7bc33b7c2b1946e110059cbe4954218f9184f3ca753a627372885a5ab41bc662"} Jan 27 16:09:48 crc kubenswrapper[4729]: I0127 16:09:48.306224 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" event={"ID":"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3","Type":"ContainerStarted","Data":"0b63596f0f6ceee072dc74f8bf2f23c41336b8f85d59f2075b42e4dc93fe2b60"} Jan 27 16:09:48 crc kubenswrapper[4729]: I0127 16:09:48.332570 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" podStartSLOduration=2.3325478840000002 podStartE2EDuration="2.332547884s" podCreationTimestamp="2026-01-27 16:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:09:48.325262989 +0000 UTC m=+7474.909453993" watchObservedRunningTime="2026-01-27 16:09:48.332547884 +0000 UTC m=+7474.916738888" Jan 27 16:09:58 crc kubenswrapper[4729]: I0127 16:09:58.052078 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:09:58 crc kubenswrapper[4729]: E0127 16:09:58.052902 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:10:11 crc 
kubenswrapper[4729]: I0127 16:10:11.050942 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:10:11 crc kubenswrapper[4729]: E0127 16:10:11.052100 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:10:23 crc kubenswrapper[4729]: I0127 16:10:23.051383 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:10:23 crc kubenswrapper[4729]: E0127 16:10:23.052394 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:10:37 crc kubenswrapper[4729]: I0127 16:10:37.867478 4729 generic.go:334] "Generic (PLEG): container finished" podID="4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" containerID="7bc33b7c2b1946e110059cbe4954218f9184f3ca753a627372885a5ab41bc662" exitCode=0 Jan 27 16:10:37 crc kubenswrapper[4729]: I0127 16:10:37.868093 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" event={"ID":"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3","Type":"ContainerDied","Data":"7bc33b7c2b1946e110059cbe4954218f9184f3ca753a627372885a5ab41bc662"} Jan 27 16:10:38 crc kubenswrapper[4729]: I0127 16:10:38.051095 4729 scope.go:117] "RemoveContainer" 
containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:10:38 crc kubenswrapper[4729]: E0127 16:10:38.051542 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.033147 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.050111 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-host\") pod \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.050308 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-host" (OuterVolumeSpecName: "host") pod "4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" (UID: "4d7db9a5-fd5c-4862-9f5e-b8b71be255b3"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.050327 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfqdc\" (UniqueName: \"kubernetes.io/projected/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-kube-api-access-gfqdc\") pod \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\" (UID: \"4d7db9a5-fd5c-4862-9f5e-b8b71be255b3\") " Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.051689 4729 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.057925 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-kube-api-access-gfqdc" (OuterVolumeSpecName: "kube-api-access-gfqdc") pod "4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" (UID: "4d7db9a5-fd5c-4862-9f5e-b8b71be255b3"). InnerVolumeSpecName "kube-api-access-gfqdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.089772 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mjwht/crc-debug-kcpcn"] Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.101154 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mjwht/crc-debug-kcpcn"] Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.153805 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfqdc\" (UniqueName: \"kubernetes.io/projected/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3-kube-api-access-gfqdc\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.888482 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b63596f0f6ceee072dc74f8bf2f23c41336b8f85d59f2075b42e4dc93fe2b60" Jan 27 16:10:39 crc kubenswrapper[4729]: I0127 16:10:39.888538 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-kcpcn" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.067911 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" path="/var/lib/kubelet/pods/4d7db9a5-fd5c-4862-9f5e-b8b71be255b3/volumes" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.478348 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mjwht/crc-debug-ssctm"] Jan 27 16:10:40 crc kubenswrapper[4729]: E0127 16:10:40.484406 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" containerName="container-00" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.484453 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" containerName="container-00" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.484939 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7db9a5-fd5c-4862-9f5e-b8b71be255b3" containerName="container-00" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.486934 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.489700 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mjwht"/"default-dockercfg-lsw5z" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.591741 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d70fd645-9ccf-4f63-b643-641657c9dcbe-host\") pod \"crc-debug-ssctm\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.591975 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrdnq\" (UniqueName: \"kubernetes.io/projected/d70fd645-9ccf-4f63-b643-641657c9dcbe-kube-api-access-wrdnq\") pod \"crc-debug-ssctm\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.694041 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d70fd645-9ccf-4f63-b643-641657c9dcbe-host\") pod \"crc-debug-ssctm\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.694218 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrdnq\" (UniqueName: \"kubernetes.io/projected/d70fd645-9ccf-4f63-b643-641657c9dcbe-kube-api-access-wrdnq\") pod \"crc-debug-ssctm\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.694273 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/d70fd645-9ccf-4f63-b643-641657c9dcbe-host\") pod \"crc-debug-ssctm\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.730563 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrdnq\" (UniqueName: \"kubernetes.io/projected/d70fd645-9ccf-4f63-b643-641657c9dcbe-kube-api-access-wrdnq\") pod \"crc-debug-ssctm\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.810278 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:40 crc kubenswrapper[4729]: W0127 16:10:40.864763 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd70fd645_9ccf_4f63_b643_641657c9dcbe.slice/crio-78932ab897392ff1bc64a368623a57b401b56eb91c933fea77b9a3969b1dd125 WatchSource:0}: Error finding container 78932ab897392ff1bc64a368623a57b401b56eb91c933fea77b9a3969b1dd125: Status 404 returned error can't find the container with id 78932ab897392ff1bc64a368623a57b401b56eb91c933fea77b9a3969b1dd125 Jan 27 16:10:40 crc kubenswrapper[4729]: I0127 16:10:40.900733 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-ssctm" event={"ID":"d70fd645-9ccf-4f63-b643-641657c9dcbe","Type":"ContainerStarted","Data":"78932ab897392ff1bc64a368623a57b401b56eb91c933fea77b9a3969b1dd125"} Jan 27 16:10:41 crc kubenswrapper[4729]: I0127 16:10:41.915622 4729 generic.go:334] "Generic (PLEG): container finished" podID="d70fd645-9ccf-4f63-b643-641657c9dcbe" containerID="d3d97bb00e37c65c14d9a80c44eebf046ef8fa941197e017090ecd5b7cad025f" exitCode=0 Jan 27 16:10:41 crc kubenswrapper[4729]: I0127 16:10:41.915800 4729 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-must-gather-mjwht/crc-debug-ssctm" event={"ID":"d70fd645-9ccf-4f63-b643-641657c9dcbe","Type":"ContainerDied","Data":"d3d97bb00e37c65c14d9a80c44eebf046ef8fa941197e017090ecd5b7cad025f"} Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.098296 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.258578 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrdnq\" (UniqueName: \"kubernetes.io/projected/d70fd645-9ccf-4f63-b643-641657c9dcbe-kube-api-access-wrdnq\") pod \"d70fd645-9ccf-4f63-b643-641657c9dcbe\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.258970 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d70fd645-9ccf-4f63-b643-641657c9dcbe-host\") pod \"d70fd645-9ccf-4f63-b643-641657c9dcbe\" (UID: \"d70fd645-9ccf-4f63-b643-641657c9dcbe\") " Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.261271 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d70fd645-9ccf-4f63-b643-641657c9dcbe-host" (OuterVolumeSpecName: "host") pod "d70fd645-9ccf-4f63-b643-641657c9dcbe" (UID: "d70fd645-9ccf-4f63-b643-641657c9dcbe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.294058 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70fd645-9ccf-4f63-b643-641657c9dcbe-kube-api-access-wrdnq" (OuterVolumeSpecName: "kube-api-access-wrdnq") pod "d70fd645-9ccf-4f63-b643-641657c9dcbe" (UID: "d70fd645-9ccf-4f63-b643-641657c9dcbe"). InnerVolumeSpecName "kube-api-access-wrdnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.363178 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrdnq\" (UniqueName: \"kubernetes.io/projected/d70fd645-9ccf-4f63-b643-641657c9dcbe-kube-api-access-wrdnq\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.363226 4729 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d70fd645-9ccf-4f63-b643-641657c9dcbe-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.939942 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-ssctm" event={"ID":"d70fd645-9ccf-4f63-b643-641657c9dcbe","Type":"ContainerDied","Data":"78932ab897392ff1bc64a368623a57b401b56eb91c933fea77b9a3969b1dd125"} Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.940232 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78932ab897392ff1bc64a368623a57b401b56eb91c933fea77b9a3969b1dd125" Jan 27 16:10:43 crc kubenswrapper[4729]: I0127 16:10:43.940286 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-ssctm" Jan 27 16:10:44 crc kubenswrapper[4729]: I0127 16:10:44.164329 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mjwht/crc-debug-ssctm"] Jan 27 16:10:44 crc kubenswrapper[4729]: I0127 16:10:44.175823 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mjwht/crc-debug-ssctm"] Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.499633 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mjwht/crc-debug-c2wf7"] Jan 27 16:10:45 crc kubenswrapper[4729]: E0127 16:10:45.500612 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70fd645-9ccf-4f63-b643-641657c9dcbe" containerName="container-00" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.500633 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70fd645-9ccf-4f63-b643-641657c9dcbe" containerName="container-00" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.500959 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70fd645-9ccf-4f63-b643-641657c9dcbe" containerName="container-00" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.502102 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.504122 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mjwht"/"default-dockercfg-lsw5z" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.620181 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bb4c620-cae7-40d5-8999-dd446f4b609f-host\") pod \"crc-debug-c2wf7\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.620803 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slwq9\" (UniqueName: \"kubernetes.io/projected/8bb4c620-cae7-40d5-8999-dd446f4b609f-kube-api-access-slwq9\") pod \"crc-debug-c2wf7\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.723133 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bb4c620-cae7-40d5-8999-dd446f4b609f-host\") pod \"crc-debug-c2wf7\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.723288 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slwq9\" (UniqueName: \"kubernetes.io/projected/8bb4c620-cae7-40d5-8999-dd446f4b609f-kube-api-access-slwq9\") pod \"crc-debug-c2wf7\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.723389 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/8bb4c620-cae7-40d5-8999-dd446f4b609f-host\") pod \"crc-debug-c2wf7\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.746437 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slwq9\" (UniqueName: \"kubernetes.io/projected/8bb4c620-cae7-40d5-8999-dd446f4b609f-kube-api-access-slwq9\") pod \"crc-debug-c2wf7\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.825853 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:45 crc kubenswrapper[4729]: W0127 16:10:45.871967 4729 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bb4c620_cae7_40d5_8999_dd446f4b609f.slice/crio-ba15bba8b2b42e646a39c0d35834893522891bf98153adbbb092059ef44c8318 WatchSource:0}: Error finding container ba15bba8b2b42e646a39c0d35834893522891bf98153adbbb092059ef44c8318: Status 404 returned error can't find the container with id ba15bba8b2b42e646a39c0d35834893522891bf98153adbbb092059ef44c8318 Jan 27 16:10:45 crc kubenswrapper[4729]: I0127 16:10:45.964571 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-c2wf7" event={"ID":"8bb4c620-cae7-40d5-8999-dd446f4b609f","Type":"ContainerStarted","Data":"ba15bba8b2b42e646a39c0d35834893522891bf98153adbbb092059ef44c8318"} Jan 27 16:10:46 crc kubenswrapper[4729]: I0127 16:10:46.066337 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d70fd645-9ccf-4f63-b643-641657c9dcbe" path="/var/lib/kubelet/pods/d70fd645-9ccf-4f63-b643-641657c9dcbe/volumes" Jan 27 16:10:46 crc kubenswrapper[4729]: I0127 16:10:46.978278 4729 generic.go:334] "Generic (PLEG): container 
finished" podID="8bb4c620-cae7-40d5-8999-dd446f4b609f" containerID="608d029b89b1afc26b083b83f976c5f0c801df4adb78e9ff6e0424814f2f2d68" exitCode=0 Jan 27 16:10:46 crc kubenswrapper[4729]: I0127 16:10:46.978336 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/crc-debug-c2wf7" event={"ID":"8bb4c620-cae7-40d5-8999-dd446f4b609f","Type":"ContainerDied","Data":"608d029b89b1afc26b083b83f976c5f0c801df4adb78e9ff6e0424814f2f2d68"} Jan 27 16:10:47 crc kubenswrapper[4729]: I0127 16:10:47.025109 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mjwht/crc-debug-c2wf7"] Jan 27 16:10:47 crc kubenswrapper[4729]: I0127 16:10:47.055772 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mjwht/crc-debug-c2wf7"] Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.112613 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.187595 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bb4c620-cae7-40d5-8999-dd446f4b609f-host\") pod \"8bb4c620-cae7-40d5-8999-dd446f4b609f\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.187682 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb4c620-cae7-40d5-8999-dd446f4b609f-host" (OuterVolumeSpecName: "host") pod "8bb4c620-cae7-40d5-8999-dd446f4b609f" (UID: "8bb4c620-cae7-40d5-8999-dd446f4b609f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.188321 4729 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8bb4c620-cae7-40d5-8999-dd446f4b609f-host\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.289126 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slwq9\" (UniqueName: \"kubernetes.io/projected/8bb4c620-cae7-40d5-8999-dd446f4b609f-kube-api-access-slwq9\") pod \"8bb4c620-cae7-40d5-8999-dd446f4b609f\" (UID: \"8bb4c620-cae7-40d5-8999-dd446f4b609f\") " Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.294902 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb4c620-cae7-40d5-8999-dd446f4b609f-kube-api-access-slwq9" (OuterVolumeSpecName: "kube-api-access-slwq9") pod "8bb4c620-cae7-40d5-8999-dd446f4b609f" (UID: "8bb4c620-cae7-40d5-8999-dd446f4b609f"). InnerVolumeSpecName "kube-api-access-slwq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:10:48 crc kubenswrapper[4729]: I0127 16:10:48.392220 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slwq9\" (UniqueName: \"kubernetes.io/projected/8bb4c620-cae7-40d5-8999-dd446f4b609f-kube-api-access-slwq9\") on node \"crc\" DevicePath \"\"" Jan 27 16:10:49 crc kubenswrapper[4729]: I0127 16:10:49.001459 4729 scope.go:117] "RemoveContainer" containerID="608d029b89b1afc26b083b83f976c5f0c801df4adb78e9ff6e0424814f2f2d68" Jan 27 16:10:49 crc kubenswrapper[4729]: I0127 16:10:49.001918 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mjwht/crc-debug-c2wf7" Jan 27 16:10:50 crc kubenswrapper[4729]: I0127 16:10:50.064355 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb4c620-cae7-40d5-8999-dd446f4b609f" path="/var/lib/kubelet/pods/8bb4c620-cae7-40d5-8999-dd446f4b609f/volumes" Jan 27 16:10:53 crc kubenswrapper[4729]: I0127 16:10:53.050791 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:10:53 crc kubenswrapper[4729]: E0127 16:10:53.051782 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:11:05 crc kubenswrapper[4729]: I0127 16:11:05.051097 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:11:05 crc kubenswrapper[4729]: E0127 16:11:05.051926 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:11:20 crc kubenswrapper[4729]: I0127 16:11:20.051291 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:11:20 crc kubenswrapper[4729]: E0127 16:11:20.052244 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.064807 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-api/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.115547 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-evaluator/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.387941 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-notifier/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.421785 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e0df588d-304c-41cb-b6bd-9d7d5987ebef/aodh-listener/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.605591 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b46b856c4-72fkv_260a1ff1-928b-446f-9480-fb8d8fe342f1/barbican-api/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.711017 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b46b856c4-72fkv_260a1ff1-928b-446f-9480-fb8d8fe342f1/barbican-api-log/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.747146 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-74fdd6f9c6-65ljj_0c7b3357-e7e9-415b-8253-7ee68b4149a0/barbican-keystone-listener/0.log" Jan 27 16:11:28 crc kubenswrapper[4729]: I0127 16:11:28.985035 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-74fdd6f9c6-65ljj_0c7b3357-e7e9-415b-8253-7ee68b4149a0/barbican-keystone-listener-log/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.005601 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8475c76cbc-gtz96_3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400/barbican-worker/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.019779 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-8475c76cbc-gtz96_3b2bc70f-ea6e-4cbb-9f55-4dcf799ad400/barbican-worker-log/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.301414 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xlpww_9795f0ec-6b8d-4470-bd63-584192019fcf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.311146 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/ceilometer-central-agent/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.589053 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/sg-core/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.603866 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/ceilometer-notification-agent/0.log" Jan 27 16:11:29 crc kubenswrapper[4729]: I0127 16:11:29.728653 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_97bf3a8e-2abb-4659-9719-fdffb80a92b1/proxy-httpd/0.log" Jan 27 16:11:30 crc kubenswrapper[4729]: I0127 16:11:30.232953 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_28e34a55-2a5a-4da3-8f4e-ece70df636e2/cinder-api-log/0.log" Jan 27 16:11:30 crc kubenswrapper[4729]: I0127 
16:11:30.360066 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c2742dbf-31d5-4550-88df-d1b01e4f7dc4/cinder-scheduler/0.log" Jan 27 16:11:30 crc kubenswrapper[4729]: I0127 16:11:30.361221 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c2742dbf-31d5-4550-88df-d1b01e4f7dc4/probe/0.log" Jan 27 16:11:30 crc kubenswrapper[4729]: I0127 16:11:30.401774 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_28e34a55-2a5a-4da3-8f4e-ece70df636e2/cinder-api/0.log" Jan 27 16:11:30 crc kubenswrapper[4729]: I0127 16:11:30.709703 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-28rl7_32728732-3b43-4a8e-9f61-f028fd4b3d74/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:30 crc kubenswrapper[4729]: I0127 16:11:30.838156 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vgk5t_6cc5ced3-d419-4224-a474-bd34874d18dc/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:31 crc kubenswrapper[4729]: I0127 16:11:31.002035 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-p9jfn_31add6d0-b976-4106-93f2-d9f13b3de020/init/0.log" Jan 27 16:11:31 crc kubenswrapper[4729]: I0127 16:11:31.276395 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xsf7g_657d96d8-d313-4860-acae-64d35608cd5d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:31 crc kubenswrapper[4729]: I0127 16:11:31.313056 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-p9jfn_31add6d0-b976-4106-93f2-d9f13b3de020/dnsmasq-dns/0.log" Jan 27 16:11:31 crc kubenswrapper[4729]: I0127 16:11:31.546266 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-p9jfn_31add6d0-b976-4106-93f2-d9f13b3de020/init/0.log" Jan 27 16:11:31 crc kubenswrapper[4729]: I0127 16:11:31.888975 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2132f727-3016-42f6-ba30-864e70540513/glance-log/0.log" Jan 27 16:11:31 crc kubenswrapper[4729]: I0127 16:11:31.950626 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2132f727-3016-42f6-ba30-864e70540513/glance-httpd/0.log" Jan 27 16:11:32 crc kubenswrapper[4729]: I0127 16:11:32.183012 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e371e969-2ec7-42fe-95bd-5765dc511224/glance-httpd/0.log" Jan 27 16:11:32 crc kubenswrapper[4729]: I0127 16:11:32.211477 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e371e969-2ec7-42fe-95bd-5765dc511224/glance-log/0.log" Jan 27 16:11:33 crc kubenswrapper[4729]: I0127 16:11:33.081672 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5c2fr_2121d941-3524-4b71-ac16-41f4679e3525/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:33 crc kubenswrapper[4729]: I0127 16:11:33.285489 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-795d549794-t2xb4_5271075b-f655-47d8-b621-44711d9e495c/heat-engine/0.log" Jan 27 16:11:33 crc kubenswrapper[4729]: I0127 16:11:33.445420 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7dfxr_ebfb952d-e5d5-4ce8-9eb7-49f058023970/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:33 crc kubenswrapper[4729]: I0127 16:11:33.514353 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-5ff89df78c-6425l_be6cee48-8743-49f2-a13b-6ce80981cfdb/heat-api/0.log" Jan 27 16:11:33 crc kubenswrapper[4729]: I0127 16:11:33.565023 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7fff5b4d49-29sw4_91704ade-1ead-4e59-b743-f93c932a4450/heat-cfnapi/0.log" Jan 27 16:11:33 crc kubenswrapper[4729]: I0127 16:11:33.838117 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492101-z8cpg_1c34a1bb-cfec-4b86-af1a-b633dd398427/keystone-cron/0.log" Jan 27 16:11:34 crc kubenswrapper[4729]: I0127 16:11:34.056843 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-79f67449f-t7hgq_66c77355-de9a-4aab-8d65-504c74911382/keystone-api/0.log" Jan 27 16:11:34 crc kubenswrapper[4729]: I0127 16:11:34.093205 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29492161-ssgd2_a6627c66-78f6-432d-83fb-20578f0e7acb/keystone-cron/0.log" Jan 27 16:11:34 crc kubenswrapper[4729]: I0127 16:11:34.249196 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_51fc79dc-e632-414e-a354-54c3bfd2eb8d/kube-state-metrics/0.log" Jan 27 16:11:34 crc kubenswrapper[4729]: I0127 16:11:34.373068 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vtdxw_33c4c74a-3a24-43e4-94ff-84a794d0db7d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:34 crc kubenswrapper[4729]: I0127 16:11:34.590145 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-b4nnk_e771e774-7470-4b36-a60e-bab34a04185a/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:35 crc kubenswrapper[4729]: I0127 16:11:35.051076 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:11:35 crc kubenswrapper[4729]: I0127 
16:11:35.306496 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_3c869358-ae88-4f4a-9317-4e1176fdb199/mysqld-exporter/0.log" Jan 27 16:11:35 crc kubenswrapper[4729]: I0127 16:11:35.591279 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ssqpf_067cab76-3d24-4a20-a016-0141d54181a2/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:35 crc kubenswrapper[4729]: I0127 16:11:35.663607 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85dcfc7bf5-fs787_8b358632-8eef-4842-91bc-9c69460a5dea/neutron-httpd/0.log" Jan 27 16:11:35 crc kubenswrapper[4729]: I0127 16:11:35.721561 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-85dcfc7bf5-fs787_8b358632-8eef-4842-91bc-9c69460a5dea/neutron-api/0.log" Jan 27 16:11:35 crc kubenswrapper[4729]: I0127 16:11:35.746417 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"f7509ca13830b3e0dcb10ef0c4eb35f32aa621b86eabdac6413e3c6572129757"} Jan 27 16:11:36 crc kubenswrapper[4729]: I0127 16:11:36.509464 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_76dbfd5b-82fa-4998-bd3b-6ead39c5f73b/nova-cell0-conductor-conductor/0.log" Jan 27 16:11:36 crc kubenswrapper[4729]: I0127 16:11:36.799221 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_48379ff4-1d0a-400d-a40b-a3ed65415c39/nova-api-log/0.log" Jan 27 16:11:37 crc kubenswrapper[4729]: I0127 16:11:37.010343 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e8c150bd-4541-46f0-8c70-1e5482e6b3f3/nova-cell1-conductor-conductor/0.log" Jan 27 16:11:37 crc kubenswrapper[4729]: I0127 16:11:37.280828 4729 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9201a195-6f0c-4521-a4d9-a31706dbedce/nova-cell1-novncproxy-novncproxy/0.log" Jan 27 16:11:37 crc kubenswrapper[4729]: I0127 16:11:37.622051 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-skhnk_d4f6bdbc-1305-4c66-8d8c-a3425163fd27/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:37 crc kubenswrapper[4729]: I0127 16:11:37.700698 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_48379ff4-1d0a-400d-a40b-a3ed65415c39/nova-api-api/0.log" Jan 27 16:11:37 crc kubenswrapper[4729]: I0127 16:11:37.717011 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5945a095-d047-46d4-aa7d-3989268e88f9/nova-metadata-log/0.log" Jan 27 16:11:38 crc kubenswrapper[4729]: I0127 16:11:38.231902 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_37137af3-5865-4774-a6bc-4a96bb11a68d/mysql-bootstrap/0.log" Jan 27 16:11:38 crc kubenswrapper[4729]: I0127 16:11:38.367277 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_96379486-5600-4752-9729-0fc090685ea4/nova-scheduler-scheduler/0.log" Jan 27 16:11:38 crc kubenswrapper[4729]: I0127 16:11:38.542576 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_37137af3-5865-4774-a6bc-4a96bb11a68d/galera/0.log" Jan 27 16:11:38 crc kubenswrapper[4729]: I0127 16:11:38.553741 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_37137af3-5865-4774-a6bc-4a96bb11a68d/mysql-bootstrap/0.log" Jan 27 16:11:38 crc kubenswrapper[4729]: I0127 16:11:38.848439 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e387f91f-9a73-4c8b-8e0b-31ed4c3874ba/mysql-bootstrap/0.log" Jan 27 16:11:39 crc kubenswrapper[4729]: I0127 
16:11:39.329751 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e387f91f-9a73-4c8b-8e0b-31ed4c3874ba/mysql-bootstrap/0.log" Jan 27 16:11:39 crc kubenswrapper[4729]: I0127 16:11:39.376852 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e387f91f-9a73-4c8b-8e0b-31ed4c3874ba/galera/0.log" Jan 27 16:11:39 crc kubenswrapper[4729]: I0127 16:11:39.593469 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_0b4b3ce4-58fb-430f-8465-ca0a501a6aba/openstackclient/0.log" Jan 27 16:11:39 crc kubenswrapper[4729]: I0127 16:11:39.758253 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gk2cz_5e4b5a47-ff01-4fd6-b69f-4d70efc77a12/ovn-controller/0.log" Jan 27 16:11:40 crc kubenswrapper[4729]: I0127 16:11:40.028779 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-ngktk_0eab35a0-e5dd-4c49-9d7b-9f8f0722e754/openstack-network-exporter/0.log" Jan 27 16:11:40 crc kubenswrapper[4729]: I0127 16:11:40.286476 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovsdb-server-init/0.log" Jan 27 16:11:40 crc kubenswrapper[4729]: I0127 16:11:40.607076 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovsdb-server-init/0.log" Jan 27 16:11:40 crc kubenswrapper[4729]: I0127 16:11:40.688704 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovs-vswitchd/0.log" Jan 27 16:11:40 crc kubenswrapper[4729]: I0127 16:11:40.752269 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gsqqc_aff22ed6-2491-4c78-94da-02f4b51493b8/ovsdb-server/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.044791 4729 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mqwsj_78f36cea-77c5-44dd-9952-6392811d2d40/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.211304 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c1da68b-399e-4543-918f-6deed78e3626/openstack-network-exporter/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.327363 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6c1da68b-399e-4543-918f-6deed78e3626/ovn-northd/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.391412 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_5945a095-d047-46d4-aa7d-3989268e88f9/nova-metadata-metadata/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.473788 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_37a67feb-a317-4a04-af97-028064ca39da/openstack-network-exporter/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.590504 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_37a67feb-a317-4a04-af97-028064ca39da/ovsdbserver-nb/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.725602 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3290bc53-f838-4b2f-9f5a-053331751546/openstack-network-exporter/0.log" Jan 27 16:11:41 crc kubenswrapper[4729]: I0127 16:11:41.801969 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_3290bc53-f838-4b2f-9f5a-053331751546/ovsdbserver-sb/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.124419 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb844499b-jdr2d_6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2/placement-api/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 
16:11:42.178591 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/init-config-reloader/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.218010 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7cb844499b-jdr2d_6c1c7268-37a4-45d9-aa5d-bcf79fb55bd2/placement-log/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.417593 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/init-config-reloader/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.480335 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/prometheus/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.498401 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/config-reloader/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.525841 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_579561e5-ea93-4bb7-bf73-5107d60a62b9/thanos-sidecar/0.log" Jan 27 16:11:42 crc kubenswrapper[4729]: I0127 16:11:42.736684 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464/setup-container/0.log" Jan 27 16:11:43 crc kubenswrapper[4729]: I0127 16:11:43.048088 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464/rabbitmq/0.log" Jan 27 16:11:43 crc kubenswrapper[4729]: I0127 16:11:43.396053 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fd5e94dd-63e8-4aa9-8004-bc2bdc8a9464/setup-container/0.log" Jan 27 16:11:43 crc 
kubenswrapper[4729]: I0127 16:11:43.400280 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a360fb27-d7b1-4d42-9889-f47c87012e2e/setup-container/0.log" Jan 27 16:11:43 crc kubenswrapper[4729]: I0127 16:11:43.714338 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a360fb27-d7b1-4d42-9889-f47c87012e2e/setup-container/0.log" Jan 27 16:11:43 crc kubenswrapper[4729]: I0127 16:11:43.719070 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a360fb27-d7b1-4d42-9889-f47c87012e2e/rabbitmq/0.log" Jan 27 16:11:43 crc kubenswrapper[4729]: I0127 16:11:43.790354 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1/setup-container/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.010951 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1/setup-container/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.113938 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_75b1f41d-64ad-4dec-a082-9e81438dfe0f/setup-container/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.123840 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_0fdb2033-6ffe-46d3-8eaf-1e824a9a47c1/rabbitmq/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.309149 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_75b1f41d-64ad-4dec-a082-9e81438dfe0f/setup-container/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.481115 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_75b1f41d-64ad-4dec-a082-9e81438dfe0f/rabbitmq/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.522417 4729 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-k5r9t_0860b8dc-10f3-41e7-8f6e-231f28f3cea6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.848467 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-dtn99_3e41d73e-3c88-46b1-a91a-1ac51fd2a4bc/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:44 crc kubenswrapper[4729]: I0127 16:11:44.925042 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8qcdw_90776e5b-71cf-43d1-969b-16278afed3cf/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.121718 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-qz44w_af66e59e-8967-4730-a4ca-9ff115554d5b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.206358 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-2nstj_27dd40db-176b-45b4-a886-967fcb9ce2df/ssh-known-hosts-edpm-deployment/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.548785 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-849d9cdd4f-w5qzz_39caa2da-8dac-4581-8e89-2b7f3b013b8c/proxy-server/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.699324 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-849d9cdd4f-w5qzz_39caa2da-8dac-4581-8e89-2b7f3b013b8c/proxy-httpd/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.722675 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-z4znt_fb94bfab-bf68-4e03-9a32-b4de4d765b1f/swift-ring-rebalance/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.820265 4729 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-auditor/0.log" Jan 27 16:11:45 crc kubenswrapper[4729]: I0127 16:11:45.971595 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-reaper/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.032575 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-replicator/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.086086 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/account-server/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.170145 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-auditor/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.305398 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-server/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.349184 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-replicator/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.408323 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/container-updater/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.454933 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-auditor/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.641424 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-expirer/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.666261 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-server/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.677071 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-replicator/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.712749 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/object-updater/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.881951 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/swift-recon-cron/0.log" Jan 27 16:11:46 crc kubenswrapper[4729]: I0127 16:11:46.898779 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_399f6c9f-a3d5-4235-bce9-f3623e6be7f4/rsync/0.log" Jan 27 16:11:47 crc kubenswrapper[4729]: I0127 16:11:47.061551 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-5hhgp_5639c133-4cde-40dc-a7f3-e716aaab5ca8/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:47 crc kubenswrapper[4729]: I0127 16:11:47.223426 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-xqf64_0b30179d-d4ac-44b0-9675-7f0ef071caf5/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:47 crc kubenswrapper[4729]: I0127 16:11:47.733848 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_d639ea1e-42fa-4467-9d0c-1f66c65c108f/test-operator-logs-container/0.log" Jan 27 16:11:47 crc kubenswrapper[4729]: I0127 16:11:47.995111 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5vr2m_78c1c1f8-7784-4bd4-8dce-7ce0fafbbc6f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 27 16:11:48 crc kubenswrapper[4729]: I0127 16:11:48.603106 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_440fdd61-ad16-4ee7-bf64-2754db1c5db8/tempest-tests-tempest-tests-runner/0.log" Jan 27 16:11:58 crc kubenswrapper[4729]: I0127 16:11:58.106081 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_041a96ab-9f21-4d02-80df-cf7d6a81323b/memcached/0.log" Jan 27 16:12:02 crc kubenswrapper[4729]: I0127 16:12:02.753784 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-ml79v" podUID="0015219d-ee39-40ea-896a-944b4b45e46b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 16:12:23 crc kubenswrapper[4729]: I0127 16:12:23.947851 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/util/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.233671 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/pull/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.235438 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/pull/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.240843 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/util/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.442313 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/util/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.494402 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/pull/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.498059 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1a42191e94a65b5e0a08cdc631f59c93fe0ad532306b23bdb4d37ace67g6ltk_384fac64-6243-404c-a413-49b548d4e510/extract/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.811733 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-fcvrz_a27299b3-aeb1-4014-a145-6b5b908542fc/manager/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.816428 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-m7jfx_27edcc9a-7976-42bd-9e8b-a7c95936f305/manager/0.log" Jan 27 16:12:24 crc kubenswrapper[4729]: I0127 16:12:24.974178 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-4bpwj_6818e775-019d-4bda-94ba-b7e550c9a127/manager/0.log" Jan 27 16:12:25 crc kubenswrapper[4729]: I0127 
16:12:25.064171 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-ck286_c9cd8871-5d83-436f-b787-a8769327429d/manager/0.log" Jan 27 16:12:25 crc kubenswrapper[4729]: I0127 16:12:25.275259 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-5csls_48e0b394-ae44-484e-821f-b821cd11c656/manager/0.log" Jan 27 16:12:25 crc kubenswrapper[4729]: I0127 16:12:25.386866 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-qlm8l_64a73e98-23a2-4634-ba0f-fcf5389e38e1/manager/0.log" Jan 27 16:12:25 crc kubenswrapper[4729]: I0127 16:12:25.687184 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-rnnng_53268481-b675-416f-a9d3-343d349e3bb4/manager/0.log" Jan 27 16:12:25 crc kubenswrapper[4729]: I0127 16:12:25.916540 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-vm6sf_ef99dd2b-4274-4277-8517-c748ef232c38/manager/0.log" Jan 27 16:12:25 crc kubenswrapper[4729]: I0127 16:12:25.986856 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-h98cg_49b9ec9d-9998-465c-b62f-5c97d5913dd7/manager/0.log" Jan 27 16:12:26 crc kubenswrapper[4729]: I0127 16:12:26.002149 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bfbgb_80666255-494b-4c9a-8434-49c509505a32/manager/0.log" Jan 27 16:12:26 crc kubenswrapper[4729]: I0127 16:12:26.228610 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-hn29j_e0d9910b-f1f9-4f1e-b920-dd1c3c787f78/manager/0.log" Jan 27 16:12:26 crc 
kubenswrapper[4729]: I0127 16:12:26.299624 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-54vz9_eb48ac92-5355-41a2-bdce-f70e47cb91d9/manager/0.log" Jan 27 16:12:26 crc kubenswrapper[4729]: I0127 16:12:26.576811 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-dpnb5_d9c052a4-bb18-4634-8acd-13d899dcc8af/manager/0.log" Jan 27 16:12:26 crc kubenswrapper[4729]: I0127 16:12:26.582087 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-w79q5_73a5b611-6e78-44bf-94ad-2a1fdf4a4819/manager/0.log" Jan 27 16:12:26 crc kubenswrapper[4729]: I0127 16:12:26.746323 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854mqwj7_5a91ffd3-fab5-40f6-b808-7d0fd80888aa/manager/0.log" Jan 27 16:12:26 crc kubenswrapper[4729]: I0127 16:12:26.972844 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-77cf586fbc-wj4vn_3cda87f1-f88f-4ade-a0fd-d0359a00e665/operator/0.log" Jan 27 16:12:27 crc kubenswrapper[4729]: I0127 16:12:27.250200 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7hw47_19dcb1bc-8570-40b5-9493-349fc2ea4cc0/registry-server/0.log" Jan 27 16:12:27 crc kubenswrapper[4729]: I0127 16:12:27.456949 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-7q6dx_e1ef0def-8b43-404c-a20f-ccffb028796d/manager/0.log" Jan 27 16:12:27 crc kubenswrapper[4729]: I0127 16:12:27.744815 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-22t59_a393649b-f1a3-44fb-9cb8-a289fcc3f01f/manager/0.log" Jan 27 
16:12:28 crc kubenswrapper[4729]: I0127 16:12:28.252796 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-khqrr_6e8131d6-585c-43f8-9231-204ef68de1ba/operator/0.log" Jan 27 16:12:28 crc kubenswrapper[4729]: I0127 16:12:28.561795 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-m4z7c_60c59a6c-9eb5-4869-8d50-2cb234912d6b/manager/0.log" Jan 27 16:12:28 crc kubenswrapper[4729]: I0127 16:12:28.687235 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-858d4757d5-qn8zm_4a222b58-d97f-4d40-9bb4-517b4798eb07/manager/0.log" Jan 27 16:12:28 crc kubenswrapper[4729]: I0127 16:12:28.924944 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-66f997549c-st8m2_8eb4b08e-edb2-4db8-af1e-549e8e1396d1/manager/0.log" Jan 27 16:12:28 crc kubenswrapper[4729]: I0127 16:12:28.938172 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-9c4lk_7f77a2ce-03ee-4d74-a7df-052255e0f337/manager/0.log" Jan 27 16:12:28 crc kubenswrapper[4729]: I0127 16:12:28.961488 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-mxmhp_d5d5726b-1680-44de-9752-2e56e45a3d12/manager/0.log" Jan 27 16:12:51 crc kubenswrapper[4729]: I0127 16:12:51.070805 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rjlbl_6ce0b622-7220-4c64-ba53-83fe3255d20c/control-plane-machine-set-operator/0.log" Jan 27 16:12:51 crc kubenswrapper[4729]: I0127 16:12:51.163531 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t2v8z_46d3221c-be55-4ab8-95f1-f55bc1eb6596/kube-rbac-proxy/0.log" Jan 27 16:12:51 crc kubenswrapper[4729]: I0127 16:12:51.257533 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t2v8z_46d3221c-be55-4ab8-95f1-f55bc1eb6596/machine-api-operator/0.log" Jan 27 16:12:57 crc kubenswrapper[4729]: I0127 16:12:57.051343 4729 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.340939166s: [/var/lib/containers/storage/overlay/fa3b3cfa1c46355e299eac6698bd918cd2617012510480397c7554ccdd5f21f6/diff /var/log/pods/openstack_openstackclient_0b4b3ce4-58fb-430f-8465-ca0a501a6aba/openstackclient/0.log]; will not log again for this container unless duration exceeds 3s Jan 27 16:13:05 crc kubenswrapper[4729]: I0127 16:13:05.323578 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-68w6c_fa223173-c466-46fc-a84d-25e55838018e/cert-manager-controller/0.log" Jan 27 16:13:05 crc kubenswrapper[4729]: I0127 16:13:05.753531 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-kgx4d_ccab51b9-7558-4837-b6f3-f7727538fbd5/cert-manager-cainjector/0.log" Jan 27 16:13:05 crc kubenswrapper[4729]: I0127 16:13:05.914007 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8pvns_e0c7f80f-6d2c-4806-a4ea-192d40937ea3/cert-manager-webhook/0.log" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.578032 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wcb9"] Jan 27 16:13:06 crc kubenswrapper[4729]: E0127 16:13:06.578625 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb4c620-cae7-40d5-8999-dd446f4b609f" containerName="container-00" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.578645 4729 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb4c620-cae7-40d5-8999-dd446f4b609f" containerName="container-00" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.579205 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb4c620-cae7-40d5-8999-dd446f4b609f" containerName="container-00" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.582187 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.668138 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wcb9"] Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.670983 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-utilities\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.671065 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-catalog-content\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.671328 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ch69\" (UniqueName: \"kubernetes.io/projected/6534e857-866c-40e1-bd14-714a0bfae664-kube-api-access-7ch69\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.774028 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ch69\" (UniqueName: \"kubernetes.io/projected/6534e857-866c-40e1-bd14-714a0bfae664-kube-api-access-7ch69\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.774145 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-utilities\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.774198 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-catalog-content\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.774617 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-utilities\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.774649 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-catalog-content\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.806977 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7ch69\" (UniqueName: \"kubernetes.io/projected/6534e857-866c-40e1-bd14-714a0bfae664-kube-api-access-7ch69\") pod \"redhat-operators-9wcb9\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:06 crc kubenswrapper[4729]: I0127 16:13:06.916183 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:09 crc kubenswrapper[4729]: I0127 16:13:09.601490 4729 patch_prober.go:28] interesting pod/logging-loki-gateway-5955fd6cd7-hds4m container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded" start-of-body= Jan 27 16:13:09 crc kubenswrapper[4729]: I0127 16:13:09.602070 4729 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-5955fd6cd7-hds4m" podUID="0c13a35c-2b09-4ffa-a6e5-10ba4311f962" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded" Jan 27 16:13:09 crc kubenswrapper[4729]: I0127 16:13:09.732952 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wcb9"] Jan 27 16:13:09 crc kubenswrapper[4729]: I0127 16:13:09.908096 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerStarted","Data":"a0d47ccb7073dd9069c3fa10997b06b5061b34714a446f8b6c59b1cd3d00d9e5"} Jan 27 16:13:10 crc kubenswrapper[4729]: I0127 16:13:10.922967 4729 generic.go:334] "Generic (PLEG): container finished" podID="6534e857-866c-40e1-bd14-714a0bfae664" containerID="044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404" exitCode=0 Jan 27 16:13:10 crc kubenswrapper[4729]: I0127 16:13:10.923228 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerDied","Data":"044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404"} Jan 27 16:13:10 crc kubenswrapper[4729]: I0127 16:13:10.930048 4729 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 16:13:12 crc kubenswrapper[4729]: I0127 16:13:12.947289 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerStarted","Data":"20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a"} Jan 27 16:13:19 crc kubenswrapper[4729]: I0127 16:13:19.008072 4729 generic.go:334] "Generic (PLEG): container finished" podID="6534e857-866c-40e1-bd14-714a0bfae664" containerID="20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a" exitCode=0 Jan 27 16:13:19 crc kubenswrapper[4729]: I0127 16:13:19.008193 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerDied","Data":"20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a"} Jan 27 16:13:21 crc kubenswrapper[4729]: I0127 16:13:21.037470 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerStarted","Data":"90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe"} Jan 27 16:13:21 crc kubenswrapper[4729]: I0127 16:13:21.071497 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9wcb9" podStartSLOduration=6.500714306 podStartE2EDuration="15.069284964s" podCreationTimestamp="2026-01-27 16:13:06 +0000 UTC" firstStartedPulling="2026-01-27 16:13:10.926485328 +0000 UTC m=+7677.510676332" lastFinishedPulling="2026-01-27 
16:13:19.495055986 +0000 UTC m=+7686.079246990" observedRunningTime="2026-01-27 16:13:21.057867887 +0000 UTC m=+7687.642058891" watchObservedRunningTime="2026-01-27 16:13:21.069284964 +0000 UTC m=+7687.653475978" Jan 27 16:13:22 crc kubenswrapper[4729]: I0127 16:13:22.912315 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-hvk8w_08c43db0-20ec-4a56-bd40-718173782b7b/nmstate-console-plugin/0.log" Jan 27 16:13:23 crc kubenswrapper[4729]: I0127 16:13:23.128250 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bbqst_15d3bff1-3e83-4e70-8118-ca0163c18e48/nmstate-handler/0.log" Jan 27 16:13:23 crc kubenswrapper[4729]: I0127 16:13:23.268929 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7cmw2_9ad69054-310c-4725-8991-d6bd0ace768d/kube-rbac-proxy/0.log" Jan 27 16:13:23 crc kubenswrapper[4729]: I0127 16:13:23.337412 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-7cmw2_9ad69054-310c-4725-8991-d6bd0ace768d/nmstate-metrics/0.log" Jan 27 16:13:23 crc kubenswrapper[4729]: I0127 16:13:23.420987 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-pg62x_5bbadca7-3b66-4264-8217-5f246163b41e/nmstate-operator/0.log" Jan 27 16:13:23 crc kubenswrapper[4729]: I0127 16:13:23.596985 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-t7h46_7b15636d-99af-4c7d-80a8-b179de0709d7/nmstate-webhook/0.log" Jan 27 16:13:26 crc kubenswrapper[4729]: I0127 16:13:26.918418 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:26 crc kubenswrapper[4729]: I0127 16:13:26.919091 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:13:27 crc kubenswrapper[4729]: I0127 16:13:27.995583 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wcb9" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" probeResult="failure" output=< Jan 27 16:13:27 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:13:27 crc kubenswrapper[4729]: > Jan 27 16:13:37 crc kubenswrapper[4729]: I0127 16:13:37.972899 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wcb9" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" probeResult="failure" output=< Jan 27 16:13:37 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:13:37 crc kubenswrapper[4729]: > Jan 27 16:13:38 crc kubenswrapper[4729]: I0127 16:13:38.312828 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/kube-rbac-proxy/0.log" Jan 27 16:13:38 crc kubenswrapper[4729]: I0127 16:13:38.407542 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/manager/0.log" Jan 27 16:13:47 crc kubenswrapper[4729]: I0127 16:13:47.973377 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wcb9" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" probeResult="failure" output=< Jan 27 16:13:47 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:13:47 crc kubenswrapper[4729]: > Jan 27 16:13:51 crc kubenswrapper[4729]: I0127 16:13:51.951580 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jn5b_e5a4281d-dad0-47ba-b48c-cb8a18c57552/prometheus-operator/0.log" Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.147496 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_009d21ee-b5c2-4d71-8a58-fc2643442532/prometheus-operator-admission-webhook/0.log" Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.241114 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_64ad3df0-d3a7-446f-a7d9-6c4194d92071/prometheus-operator-admission-webhook/0.log" Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.398499 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-gcmgr_5b2e021c-d93d-45b1-81be-040aa9ab8ada/operator/0.log" Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.457144 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m2dsv_be65005b-48eb-45fe-b1e7-f5b5416fd8f3/observability-ui-dashboards/0.log" Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.607269 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p5mb2_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31/perses-operator/0.log" Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.655333 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:13:52 crc kubenswrapper[4729]: I0127 16:13:52.655399 4729 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:13:57 crc kubenswrapper[4729]: I0127 16:13:57.969459 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9wcb9" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" probeResult="failure" output=< Jan 27 16:13:57 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:13:57 crc kubenswrapper[4729]: > Jan 27 16:14:06 crc kubenswrapper[4729]: I0127 16:14:06.978716 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:14:07 crc kubenswrapper[4729]: I0127 16:14:07.033957 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:14:07 crc kubenswrapper[4729]: I0127 16:14:07.787046 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wcb9"] Jan 27 16:14:08 crc kubenswrapper[4729]: I0127 16:14:08.614719 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wcb9" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" containerID="cri-o://90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe" gracePeriod=2 Jan 27 16:14:08 crc kubenswrapper[4729]: I0127 16:14:08.618208 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-6vccq_1b55fd12-cb85-45bc-aad0-b2326d50aed1/cluster-logging-operator/0.log" Jan 27 16:14:08 crc kubenswrapper[4729]: I0127 16:14:08.790475 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_collector-p56mj_6bf88053-f822-4735-b8df-cfd1622aad97/collector/0.log" Jan 27 16:14:08 crc kubenswrapper[4729]: I0127 16:14:08.839006 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_cc44c481-9e30-42f7-883b-209184e04fba/loki-compactor/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.028794 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-c62w8_c05d5a86-89ad-486f-b7dd-404906e2ae3b/loki-distributor/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.166833 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-hds4m_0c13a35c-2b09-4ffa-a6e5-10ba4311f962/gateway/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.259844 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.268710 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-hds4m_0c13a35c-2b09-4ffa-a6e5-10ba4311f962/opa/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.268940 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-jf45j_f7d912e8-1da3-439c-9e59-66145d48e35c/gateway/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.346572 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-catalog-content\") pod \"6534e857-866c-40e1-bd14-714a0bfae664\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.346715 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-utilities\") pod \"6534e857-866c-40e1-bd14-714a0bfae664\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.346902 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ch69\" (UniqueName: \"kubernetes.io/projected/6534e857-866c-40e1-bd14-714a0bfae664-kube-api-access-7ch69\") pod \"6534e857-866c-40e1-bd14-714a0bfae664\" (UID: \"6534e857-866c-40e1-bd14-714a0bfae664\") " Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.352466 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-utilities" (OuterVolumeSpecName: "utilities") pod "6534e857-866c-40e1-bd14-714a0bfae664" (UID: "6534e857-866c-40e1-bd14-714a0bfae664"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.375845 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6534e857-866c-40e1-bd14-714a0bfae664-kube-api-access-7ch69" (OuterVolumeSpecName: "kube-api-access-7ch69") pod "6534e857-866c-40e1-bd14-714a0bfae664" (UID: "6534e857-866c-40e1-bd14-714a0bfae664"). InnerVolumeSpecName "kube-api-access-7ch69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.427392 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5955fd6cd7-jf45j_f7d912e8-1da3-439c-9e59-66145d48e35c/opa/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.450010 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.450064 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ch69\" (UniqueName: \"kubernetes.io/projected/6534e857-866c-40e1-bd14-714a0bfae664-kube-api-access-7ch69\") on node \"crc\" DevicePath \"\"" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.504057 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6534e857-866c-40e1-bd14-714a0bfae664" (UID: "6534e857-866c-40e1-bd14-714a0bfae664"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.552405 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6534e857-866c-40e1-bd14-714a0bfae664-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.582773 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_cb7f1542-ef3d-4033-9345-6c504620a57e/loki-index-gateway/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.624947 4729 generic.go:334] "Generic (PLEG): container finished" podID="6534e857-866c-40e1-bd14-714a0bfae664" containerID="90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe" exitCode=0 Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.624994 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerDied","Data":"90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe"} Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.625026 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wcb9" event={"ID":"6534e857-866c-40e1-bd14-714a0bfae664","Type":"ContainerDied","Data":"a0d47ccb7073dd9069c3fa10997b06b5061b34714a446f8b6c59b1cd3d00d9e5"} Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.625039 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wcb9" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.626212 4729 scope.go:117] "RemoveContainer" containerID="90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.670120 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wcb9"] Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.681509 4729 scope.go:117] "RemoveContainer" containerID="20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.688188 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wcb9"] Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.721234 4729 scope.go:117] "RemoveContainer" containerID="044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.800075 4729 scope.go:117] "RemoveContainer" containerID="90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe" Jan 27 16:14:09 crc kubenswrapper[4729]: E0127 16:14:09.802034 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe\": container with ID starting with 90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe not found: ID does not exist" containerID="90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.802091 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe"} err="failed to get container status \"90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe\": rpc error: code = NotFound desc = could not find container 
\"90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe\": container with ID starting with 90094d886a2326309ecfb8cc2fc4d6125dd5a0070a482b864c5f517858aefffe not found: ID does not exist" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.802115 4729 scope.go:117] "RemoveContainer" containerID="20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a" Jan 27 16:14:09 crc kubenswrapper[4729]: E0127 16:14:09.803610 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a\": container with ID starting with 20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a not found: ID does not exist" containerID="20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.803653 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a"} err="failed to get container status \"20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a\": rpc error: code = NotFound desc = could not find container \"20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a\": container with ID starting with 20fa9a41b2f5c9693dfec6992c7fbe5340620787573b593ada987abde744946a not found: ID does not exist" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.803685 4729 scope.go:117] "RemoveContainer" containerID="044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404" Jan 27 16:14:09 crc kubenswrapper[4729]: E0127 16:14:09.806266 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404\": container with ID starting with 044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404 not found: ID does not exist" 
containerID="044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.806304 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404"} err="failed to get container status \"044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404\": rpc error: code = NotFound desc = could not find container \"044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404\": container with ID starting with 044387880e7f76d910e902c8195eebffedfe07b0b6213fd3d8efef2177b40404 not found: ID does not exist" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.821955 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-jk5rc_c529bcb3-c119-47c9-8311-53d2c13f5ddb/loki-querier/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.841462 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_7f768b2c-e000-4052-9e92-82a3bde68514/loki-ingester/0.log" Jan 27 16:14:09 crc kubenswrapper[4729]: I0127 16:14:09.998386 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-g8jsr_0c35c4d5-cfb1-4d36-b502-5a9102ac0886/loki-query-frontend/0.log" Jan 27 16:14:10 crc kubenswrapper[4729]: I0127 16:14:10.084784 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6534e857-866c-40e1-bd14-714a0bfae664" path="/var/lib/kubelet/pods/6534e857-866c-40e1-bd14-714a0bfae664/volumes" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.191950 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmh8"] Jan 27 16:14:12 crc kubenswrapper[4729]: E0127 16:14:12.192770 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6534e857-866c-40e1-bd14-714a0bfae664" 
containerName="registry-server" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.192783 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" Jan 27 16:14:12 crc kubenswrapper[4729]: E0127 16:14:12.192810 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="extract-content" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.192816 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="extract-content" Jan 27 16:14:12 crc kubenswrapper[4729]: E0127 16:14:12.192836 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="extract-utilities" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.192843 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="extract-utilities" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.193089 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="6534e857-866c-40e1-bd14-714a0bfae664" containerName="registry-server" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.194784 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.208331 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmh8"] Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.219337 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-catalog-content\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.219387 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-utilities\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.219444 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mxl\" (UniqueName: \"kubernetes.io/projected/c72961d9-781b-4978-9851-bd0ac2e33aae-kube-api-access-r6mxl\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.321834 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-catalog-content\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.321939 4729 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-utilities\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.322038 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mxl\" (UniqueName: \"kubernetes.io/projected/c72961d9-781b-4978-9851-bd0ac2e33aae-kube-api-access-r6mxl\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.322372 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-catalog-content\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.322448 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-utilities\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.348631 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mxl\" (UniqueName: \"kubernetes.io/projected/c72961d9-781b-4978-9851-bd0ac2e33aae-kube-api-access-r6mxl\") pod \"redhat-marketplace-fbmh8\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:12 crc kubenswrapper[4729]: I0127 16:14:12.521081 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:13 crc kubenswrapper[4729]: I0127 16:14:13.057481 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmh8"] Jan 27 16:14:13 crc kubenswrapper[4729]: I0127 16:14:13.694270 4729 generic.go:334] "Generic (PLEG): container finished" podID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerID="1528e7d0dff0b9e26b72e0d915e14dbe8bffe5e5387f6994046cfda14c77465a" exitCode=0 Jan 27 16:14:13 crc kubenswrapper[4729]: I0127 16:14:13.694561 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerDied","Data":"1528e7d0dff0b9e26b72e0d915e14dbe8bffe5e5387f6994046cfda14c77465a"} Jan 27 16:14:13 crc kubenswrapper[4729]: I0127 16:14:13.694599 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerStarted","Data":"74d9f03f6312a7dfb3f5ba439b4c6529d6ef12454958669fef847274d3501629"} Jan 27 16:14:14 crc kubenswrapper[4729]: I0127 16:14:14.705954 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerStarted","Data":"da84de481564f803f69cc1667f124c38a6a830d63ebaa51e9eff769c90f07464"} Jan 27 16:14:15 crc kubenswrapper[4729]: I0127 16:14:15.721176 4729 generic.go:334] "Generic (PLEG): container finished" podID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerID="da84de481564f803f69cc1667f124c38a6a830d63ebaa51e9eff769c90f07464" exitCode=0 Jan 27 16:14:15 crc kubenswrapper[4729]: I0127 16:14:15.721559 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" 
event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerDied","Data":"da84de481564f803f69cc1667f124c38a6a830d63ebaa51e9eff769c90f07464"} Jan 27 16:14:16 crc kubenswrapper[4729]: I0127 16:14:16.735316 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerStarted","Data":"07039756a2e293281367f4f152c94b4d030d0f334446954d986f014d3ed18b2b"} Jan 27 16:14:16 crc kubenswrapper[4729]: I0127 16:14:16.764059 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fbmh8" podStartSLOduration=2.321637289 podStartE2EDuration="4.764040364s" podCreationTimestamp="2026-01-27 16:14:12 +0000 UTC" firstStartedPulling="2026-01-27 16:14:13.701424919 +0000 UTC m=+7740.285615923" lastFinishedPulling="2026-01-27 16:14:16.143827994 +0000 UTC m=+7742.728018998" observedRunningTime="2026-01-27 16:14:16.755350939 +0000 UTC m=+7743.339541963" watchObservedRunningTime="2026-01-27 16:14:16.764040364 +0000 UTC m=+7743.348231388" Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.521181 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.521739 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.569992 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.654748 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.654803 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.840500 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:22 crc kubenswrapper[4729]: I0127 16:14:22.895188 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmh8"] Jan 27 16:14:24 crc kubenswrapper[4729]: I0127 16:14:24.816460 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fbmh8" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="registry-server" containerID="cri-o://07039756a2e293281367f4f152c94b4d030d0f334446954d986f014d3ed18b2b" gracePeriod=2 Jan 27 16:14:24 crc kubenswrapper[4729]: I0127 16:14:24.820457 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wdllv_c39bec12-16a0-40f8-996b-ca212fedccc2/kube-rbac-proxy/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.068825 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.125631 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wdllv_c39bec12-16a0-40f8-996b-ca212fedccc2/controller/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.283844 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.297499 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.333341 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.334210 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.560258 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.578408 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.584482 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.586373 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.787547 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-frr-files/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.820230 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-reloader/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.830451 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/cp-metrics/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.848252 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/controller/0.log" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.852083 4729 generic.go:334] "Generic (PLEG): container finished" podID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerID="07039756a2e293281367f4f152c94b4d030d0f334446954d986f014d3ed18b2b" exitCode=0 Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.852123 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerDied","Data":"07039756a2e293281367f4f152c94b4d030d0f334446954d986f014d3ed18b2b"} Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.852158 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fbmh8" event={"ID":"c72961d9-781b-4978-9851-bd0ac2e33aae","Type":"ContainerDied","Data":"74d9f03f6312a7dfb3f5ba439b4c6529d6ef12454958669fef847274d3501629"} Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.852197 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d9f03f6312a7dfb3f5ba439b4c6529d6ef12454958669fef847274d3501629" Jan 27 16:14:25 crc kubenswrapper[4729]: I0127 16:14:25.910824 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.069745 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-utilities\") pod \"c72961d9-781b-4978-9851-bd0ac2e33aae\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.070126 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-catalog-content\") pod \"c72961d9-781b-4978-9851-bd0ac2e33aae\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.070237 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6mxl\" (UniqueName: \"kubernetes.io/projected/c72961d9-781b-4978-9851-bd0ac2e33aae-kube-api-access-r6mxl\") pod \"c72961d9-781b-4978-9851-bd0ac2e33aae\" (UID: \"c72961d9-781b-4978-9851-bd0ac2e33aae\") " Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.070795 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-utilities" (OuterVolumeSpecName: "utilities") pod "c72961d9-781b-4978-9851-bd0ac2e33aae" (UID: "c72961d9-781b-4978-9851-bd0ac2e33aae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.089330 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72961d9-781b-4978-9851-bd0ac2e33aae-kube-api-access-r6mxl" (OuterVolumeSpecName: "kube-api-access-r6mxl") pod "c72961d9-781b-4978-9851-bd0ac2e33aae" (UID: "c72961d9-781b-4978-9851-bd0ac2e33aae"). InnerVolumeSpecName "kube-api-access-r6mxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.089362 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/frr-metrics/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.094988 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/kube-rbac-proxy-frr/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.095455 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c72961d9-781b-4978-9851-bd0ac2e33aae" (UID: "c72961d9-781b-4978-9851-bd0ac2e33aae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.105234 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/kube-rbac-proxy/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.172958 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.172994 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6mxl\" (UniqueName: \"kubernetes.io/projected/c72961d9-781b-4978-9851-bd0ac2e33aae-kube-api-access-r6mxl\") on node \"crc\" DevicePath \"\"" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.173009 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c72961d9-781b-4978-9851-bd0ac2e33aae-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:14:26 crc kubenswrapper[4729]: 
I0127 16:14:26.335657 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/reloader/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.412612 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-f7nvd_cdef1951-7494-4265-9e1c-9098dab9c112/frr-k8s-webhook-server/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.673447 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58cc84db45-5jsdn_816e93a1-24ab-4c0b-acd4-439e95ae655d/manager/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.842602 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b46645688-26b9b_07916c16-27c4-4035-855c-f5ca61af09df/webhook-server/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.863825 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fbmh8" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.912940 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmh8"] Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.918163 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k2h86_2069295e-9cb7-458a-b4f6-4f569b6e6a8e/kube-rbac-proxy/0.log" Jan 27 16:14:26 crc kubenswrapper[4729]: I0127 16:14:26.924171 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fbmh8"] Jan 27 16:14:27 crc kubenswrapper[4729]: I0127 16:14:27.862530 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-k2h86_2069295e-9cb7-458a-b4f6-4f569b6e6a8e/speaker/0.log" Jan 27 16:14:28 crc kubenswrapper[4729]: I0127 16:14:28.099006 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" path="/var/lib/kubelet/pods/c72961d9-781b-4978-9851-bd0ac2e33aae/volumes" Jan 27 16:14:28 crc kubenswrapper[4729]: I0127 16:14:28.297946 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ml79v_0015219d-ee39-40ea-896a-944b4b45e46b/frr/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.262005 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/util/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.490907 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/pull/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.510465 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/util/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.522473 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/pull/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.682812 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/util/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.728189 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/pull/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.732984 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a25f8n6_60fb67db-16cd-4ee3-a6d5-68b8be36ace9/extract/0.log" Jan 27 16:14:41 crc kubenswrapper[4729]: I0127 16:14:41.886320 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/util/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.075218 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/pull/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.086621 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/pull/0.log" Jan 27 
16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.105373 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/util/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.303897 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/util/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.324323 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/pull/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.349063 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckbwtx_6f5c81c1-80ca-48a9-b260-f7ed1e0a45d4/extract/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.494983 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/util/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.690246 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/util/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.724779 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/pull/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.728730 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/pull/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.910297 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/util/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.940120 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/extract/0.log" Jan 27 16:14:42 crc kubenswrapper[4729]: I0127 16:14:42.976215 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bmj2qm_a6125a70-d6bd-465f-85a9-6a39034b628b/pull/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.150005 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/util/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.336185 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/pull/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.366572 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/pull/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.368112 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/util/0.log" Jan 27 
16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.549270 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/extract/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.555458 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/util/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.602520 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bjvf5_293cacf8-dc6b-4065-b80c-5020312fc92a/pull/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.761696 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/util/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.986113 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/pull/0.log" Jan 27 16:14:43 crc kubenswrapper[4729]: I0127 16:14:43.993614 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/util/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.005752 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/pull/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.174350 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/util/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.180748 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/pull/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.272645 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f085zg59_cdb1f7e2-974e-4c1b-9bdc-352e4d38af6c/extract/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.403405 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-utilities/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.560164 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-utilities/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.585980 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-content/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.604395 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-content/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.802121 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-utilities/0.log" Jan 27 16:14:44 crc kubenswrapper[4729]: I0127 16:14:44.903856 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/extract-content/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.051486 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-utilities/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.629068 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-q5jzg_9ec2206e-db3e-4a0f-b63c-a9a233e7a1fb/registry-server/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.650125 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-content/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.697379 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-utilities/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.706859 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-content/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.823773 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-content/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.865542 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/extract-utilities/0.log" Jan 27 16:14:45 crc kubenswrapper[4729]: I0127 16:14:45.936011 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c26vz_cf7bbeaf-d788-4a89-94f5-af01034515c5/marketplace-operator/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.165370 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-utilities/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.343376 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-utilities/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.405155 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-content/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.445307 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-content/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.681686 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-utilities/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.693858 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/extract-content/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.911405 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-86mvc_3934c5e8-bbe9-4ce9-84da-61ee1f3e968a/registry-server/0.log" Jan 27 16:14:46 crc kubenswrapper[4729]: I0127 16:14:46.925577 4729 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-utilities/0.log" Jan 27 16:14:47 crc kubenswrapper[4729]: I0127 16:14:47.079509 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wlhr9_f11add35-16a1-4182-92bb-55f9144ffe2a/registry-server/0.log" Jan 27 16:14:47 crc kubenswrapper[4729]: I0127 16:14:47.114115 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-content/0.log" Jan 27 16:14:47 crc kubenswrapper[4729]: I0127 16:14:47.142523 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-content/0.log" Jan 27 16:14:47 crc kubenswrapper[4729]: I0127 16:14:47.146647 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-utilities/0.log" Jan 27 16:14:47 crc kubenswrapper[4729]: I0127 16:14:47.373211 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-utilities/0.log" Jan 27 16:14:47 crc kubenswrapper[4729]: I0127 16:14:47.382583 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/extract-content/0.log" Jan 27 16:14:48 crc kubenswrapper[4729]: I0127 16:14:48.301369 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-v9nd4_fd6bbce9-632e-4493-9867-9859ee8a4aeb/registry-server/0.log" Jan 27 16:14:52 crc kubenswrapper[4729]: I0127 16:14:52.655534 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:14:52 crc kubenswrapper[4729]: I0127 16:14:52.655972 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:14:52 crc kubenswrapper[4729]: I0127 16:14:52.656016 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" Jan 27 16:14:52 crc kubenswrapper[4729]: I0127 16:14:52.657622 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7509ca13830b3e0dcb10ef0c4eb35f32aa621b86eabdac6413e3c6572129757"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 16:14:52 crc kubenswrapper[4729]: I0127 16:14:52.657688 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://f7509ca13830b3e0dcb10ef0c4eb35f32aa621b86eabdac6413e3c6572129757" gracePeriod=600 Jan 27 16:14:53 crc kubenswrapper[4729]: I0127 16:14:53.172390 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="f7509ca13830b3e0dcb10ef0c4eb35f32aa621b86eabdac6413e3c6572129757" exitCode=0 Jan 27 16:14:53 crc kubenswrapper[4729]: I0127 16:14:53.173065 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" 
event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"f7509ca13830b3e0dcb10ef0c4eb35f32aa621b86eabdac6413e3c6572129757"} Jan 27 16:14:53 crc kubenswrapper[4729]: I0127 16:14:53.173094 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerStarted","Data":"5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"} Jan 27 16:14:53 crc kubenswrapper[4729]: I0127 16:14:53.173117 4729 scope.go:117] "RemoveContainer" containerID="a05ed6f7b5addf36abf9a7000533174bd17748ad30c2d0a27758e0ee11e00d36" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.330045 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6"] Jan 27 16:15:00 crc kubenswrapper[4729]: E0127 16:15:00.331271 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="registry-server" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.331290 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="registry-server" Jan 27 16:15:00 crc kubenswrapper[4729]: E0127 16:15:00.331315 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="extract-utilities" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.331323 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="extract-utilities" Jan 27 16:15:00 crc kubenswrapper[4729]: E0127 16:15:00.331333 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="extract-content" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.331341 4729 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="extract-content" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.331670 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="c72961d9-781b-4978-9851-bd0ac2e33aae" containerName="registry-server" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.340566 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.359705 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6"] Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.374024 4729 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.374944 4729 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.389682 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9d7t\" (UniqueName: \"kubernetes.io/projected/17b6336a-b02e-41a9-8a6f-11a563051b91-kube-api-access-q9d7t\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.389893 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b6336a-b02e-41a9-8a6f-11a563051b91-config-volume\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 
16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.389953 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b6336a-b02e-41a9-8a6f-11a563051b91-secret-volume\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.492044 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9d7t\" (UniqueName: \"kubernetes.io/projected/17b6336a-b02e-41a9-8a6f-11a563051b91-kube-api-access-q9d7t\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.492286 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b6336a-b02e-41a9-8a6f-11a563051b91-config-volume\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.492364 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b6336a-b02e-41a9-8a6f-11a563051b91-secret-volume\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.495297 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b6336a-b02e-41a9-8a6f-11a563051b91-config-volume\") pod \"collect-profiles-29492175-9x4x6\" (UID: 
\"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.517789 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-k96w6_009d21ee-b5c2-4d71-8a58-fc2643442532/prometheus-operator-admission-webhook/0.log" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.522253 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b6336a-b02e-41a9-8a6f-11a563051b91-secret-volume\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.522756 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9d7t\" (UniqueName: \"kubernetes.io/projected/17b6336a-b02e-41a9-8a6f-11a563051b91-kube-api-access-q9d7t\") pod \"collect-profiles-29492175-9x4x6\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.525048 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6jn5b_e5a4281d-dad0-47ba-b48c-cb8a18c57552/prometheus-operator/0.log" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.581041 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-77f88549bc-zq5cv_64ad3df0-d3a7-446f-a7d9-6c4194d92071/prometheus-operator-admission-webhook/0.log" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.689577 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.789195 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-gcmgr_5b2e021c-d93d-45b1-81be-040aa9ab8ada/operator/0.log" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.821171 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-m2dsv_be65005b-48eb-45fe-b1e7-f5b5416fd8f3/observability-ui-dashboards/0.log" Jan 27 16:15:00 crc kubenswrapper[4729]: I0127 16:15:00.860646 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-p5mb2_a67b9ac3-c13b-4a42-b4e8-35a0bae5aa31/perses-operator/0.log" Jan 27 16:15:01 crc kubenswrapper[4729]: I0127 16:15:01.246773 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6"] Jan 27 16:15:02 crc kubenswrapper[4729]: I0127 16:15:02.269803 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" event={"ID":"17b6336a-b02e-41a9-8a6f-11a563051b91","Type":"ContainerStarted","Data":"de1b39fcb97f7d3d506270a34a007599f2b5ad2cf59a8c5b3d3420fb9bd2d528"} Jan 27 16:15:02 crc kubenswrapper[4729]: I0127 16:15:02.270154 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" event={"ID":"17b6336a-b02e-41a9-8a6f-11a563051b91","Type":"ContainerStarted","Data":"cc102dec2da1e0d76120f14c54e6b3a6ec92b59fe9b22335423509163f8c9ab7"} Jan 27 16:15:02 crc kubenswrapper[4729]: I0127 16:15:02.300564 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" podStartSLOduration=2.30051281 
podStartE2EDuration="2.30051281s" podCreationTimestamp="2026-01-27 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 16:15:02.290801009 +0000 UTC m=+7788.874992013" watchObservedRunningTime="2026-01-27 16:15:02.30051281 +0000 UTC m=+7788.884703824" Jan 27 16:15:03 crc kubenswrapper[4729]: I0127 16:15:03.282676 4729 generic.go:334] "Generic (PLEG): container finished" podID="17b6336a-b02e-41a9-8a6f-11a563051b91" containerID="de1b39fcb97f7d3d506270a34a007599f2b5ad2cf59a8c5b3d3420fb9bd2d528" exitCode=0 Jan 27 16:15:03 crc kubenswrapper[4729]: I0127 16:15:03.283304 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" event={"ID":"17b6336a-b02e-41a9-8a6f-11a563051b91","Type":"ContainerDied","Data":"de1b39fcb97f7d3d506270a34a007599f2b5ad2cf59a8c5b3d3420fb9bd2d528"} Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.427606 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.544239 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9d7t\" (UniqueName: \"kubernetes.io/projected/17b6336a-b02e-41a9-8a6f-11a563051b91-kube-api-access-q9d7t\") pod \"17b6336a-b02e-41a9-8a6f-11a563051b91\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.544391 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b6336a-b02e-41a9-8a6f-11a563051b91-secret-volume\") pod \"17b6336a-b02e-41a9-8a6f-11a563051b91\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.544610 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b6336a-b02e-41a9-8a6f-11a563051b91-config-volume\") pod \"17b6336a-b02e-41a9-8a6f-11a563051b91\" (UID: \"17b6336a-b02e-41a9-8a6f-11a563051b91\") " Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.546641 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b6336a-b02e-41a9-8a6f-11a563051b91-config-volume" (OuterVolumeSpecName: "config-volume") pod "17b6336a-b02e-41a9-8a6f-11a563051b91" (UID: "17b6336a-b02e-41a9-8a6f-11a563051b91"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.557827 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b6336a-b02e-41a9-8a6f-11a563051b91-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17b6336a-b02e-41a9-8a6f-11a563051b91" (UID: "17b6336a-b02e-41a9-8a6f-11a563051b91"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.558296 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b6336a-b02e-41a9-8a6f-11a563051b91-kube-api-access-q9d7t" (OuterVolumeSpecName: "kube-api-access-q9d7t") pod "17b6336a-b02e-41a9-8a6f-11a563051b91" (UID: "17b6336a-b02e-41a9-8a6f-11a563051b91"). InnerVolumeSpecName "kube-api-access-q9d7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.647895 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9d7t\" (UniqueName: \"kubernetes.io/projected/17b6336a-b02e-41a9-8a6f-11a563051b91-kube-api-access-q9d7t\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.647940 4729 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17b6336a-b02e-41a9-8a6f-11a563051b91-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:05 crc kubenswrapper[4729]: I0127 16:15:05.647951 4729 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17b6336a-b02e-41a9-8a6f-11a563051b91-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 16:15:06 crc kubenswrapper[4729]: I0127 16:15:06.327094 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" event={"ID":"17b6336a-b02e-41a9-8a6f-11a563051b91","Type":"ContainerDied","Data":"cc102dec2da1e0d76120f14c54e6b3a6ec92b59fe9b22335423509163f8c9ab7"} Jan 27 16:15:06 crc kubenswrapper[4729]: I0127 16:15:06.327410 4729 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc102dec2da1e0d76120f14c54e6b3a6ec92b59fe9b22335423509163f8c9ab7" Jan 27 16:15:06 crc kubenswrapper[4729]: I0127 16:15:06.327262 4729 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29492175-9x4x6" Jan 27 16:15:06 crc kubenswrapper[4729]: I0127 16:15:06.532585 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg"] Jan 27 16:15:06 crc kubenswrapper[4729]: I0127 16:15:06.545309 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29492130-svkrg"] Jan 27 16:15:08 crc kubenswrapper[4729]: I0127 16:15:08.066646 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a01eba2a-349a-4cda-99c3-2b658358a3ab" path="/var/lib/kubelet/pods/a01eba2a-349a-4cda-99c3-2b658358a3ab/volumes" Jan 27 16:15:15 crc kubenswrapper[4729]: I0127 16:15:15.464855 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/kube-rbac-proxy/0.log" Jan 27 16:15:15 crc kubenswrapper[4729]: I0127 16:15:15.532068 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5975c77b68-sdbrg_cf09e55d-e675-4bbe-aca3-853b9bc46cbc/manager/0.log" Jan 27 16:15:17 crc kubenswrapper[4729]: I0127 16:15:17.881751 4729 scope.go:117] "RemoveContainer" containerID="b9e521344a5aa551ee7c166a7e7065b90565bb873e08760b41e4cefb70eda65e" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.448551 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s6fzp"] Jan 27 16:15:40 crc kubenswrapper[4729]: E0127 16:15:40.449545 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b6336a-b02e-41a9-8a6f-11a563051b91" containerName="collect-profiles" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.449582 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b6336a-b02e-41a9-8a6f-11a563051b91" 
containerName="collect-profiles" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.449807 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b6336a-b02e-41a9-8a6f-11a563051b91" containerName="collect-profiles" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.451658 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.492491 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-catalog-content\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.492809 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzx7\" (UniqueName: \"kubernetes.io/projected/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-kube-api-access-9qzx7\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.492974 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-utilities\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.527258 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6fzp"] Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.595859 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-catalog-content\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.596008 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qzx7\" (UniqueName: \"kubernetes.io/projected/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-kube-api-access-9qzx7\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.596157 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-utilities\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.656897 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-catalog-content\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.658228 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-utilities\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.719215 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzx7\" (UniqueName: 
\"kubernetes.io/projected/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-kube-api-access-9qzx7\") pod \"certified-operators-s6fzp\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: I0127 16:15:40.780976 4729 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:40 crc kubenswrapper[4729]: E0127 16:15:40.807814 4729 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.171:59524->38.129.56.171:42429: write tcp 38.129.56.171:59524->38.129.56.171:42429: write: broken pipe Jan 27 16:15:41 crc kubenswrapper[4729]: I0127 16:15:41.849039 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6fzp"] Jan 27 16:15:42 crc kubenswrapper[4729]: I0127 16:15:42.811475 4729 generic.go:334] "Generic (PLEG): container finished" podID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerID="c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41" exitCode=0 Jan 27 16:15:42 crc kubenswrapper[4729]: I0127 16:15:42.812034 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerDied","Data":"c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41"} Jan 27 16:15:42 crc kubenswrapper[4729]: I0127 16:15:42.812070 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerStarted","Data":"e9e65a1b96b83ca7bd51d0c9a440a69b8efce4a1ec2fa208db7391e2a67d06dc"} Jan 27 16:15:44 crc kubenswrapper[4729]: I0127 16:15:44.837041 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" 
event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerStarted","Data":"36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809"} Jan 27 16:15:47 crc kubenswrapper[4729]: I0127 16:15:47.869492 4729 generic.go:334] "Generic (PLEG): container finished" podID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerID="36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809" exitCode=0 Jan 27 16:15:47 crc kubenswrapper[4729]: I0127 16:15:47.869565 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerDied","Data":"36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809"} Jan 27 16:15:48 crc kubenswrapper[4729]: I0127 16:15:48.886547 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerStarted","Data":"ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2"} Jan 27 16:15:48 crc kubenswrapper[4729]: I0127 16:15:48.916588 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s6fzp" podStartSLOduration=3.41886873 podStartE2EDuration="8.916567062s" podCreationTimestamp="2026-01-27 16:15:40 +0000 UTC" firstStartedPulling="2026-01-27 16:15:42.815575675 +0000 UTC m=+7829.399766679" lastFinishedPulling="2026-01-27 16:15:48.313274007 +0000 UTC m=+7834.897465011" observedRunningTime="2026-01-27 16:15:48.903383188 +0000 UTC m=+7835.487574212" watchObservedRunningTime="2026-01-27 16:15:48.916567062 +0000 UTC m=+7835.500758076" Jan 27 16:15:50 crc kubenswrapper[4729]: I0127 16:15:50.784126 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:50 crc kubenswrapper[4729]: I0127 16:15:50.784552 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:15:51 crc kubenswrapper[4729]: I0127 16:15:51.849406 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s6fzp" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="registry-server" probeResult="failure" output=< Jan 27 16:15:51 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:15:51 crc kubenswrapper[4729]: > Jan 27 16:16:01 crc kubenswrapper[4729]: I0127 16:16:01.860468 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-s6fzp" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="registry-server" probeResult="failure" output=< Jan 27 16:16:01 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s Jan 27 16:16:01 crc kubenswrapper[4729]: > Jan 27 16:16:10 crc kubenswrapper[4729]: I0127 16:16:10.860400 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:16:10 crc kubenswrapper[4729]: I0127 16:16:10.914729 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:16:11 crc kubenswrapper[4729]: I0127 16:16:11.662742 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6fzp"] Jan 27 16:16:12 crc kubenswrapper[4729]: I0127 16:16:12.148929 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s6fzp" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="registry-server" containerID="cri-o://ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2" gracePeriod=2 Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.067693 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.159325 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-utilities\") pod \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.160374 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-catalog-content\") pod \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.160454 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-utilities" (OuterVolumeSpecName: "utilities") pod "fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" (UID: "fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.160601 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzx7\" (UniqueName: \"kubernetes.io/projected/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-kube-api-access-9qzx7\") pod \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\" (UID: \"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1\") " Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.161737 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.167385 4729 generic.go:334] "Generic (PLEG): container finished" podID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerID="ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2" exitCode=0 Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.167425 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerDied","Data":"ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2"} Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.167450 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6fzp" event={"ID":"fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1","Type":"ContainerDied","Data":"e9e65a1b96b83ca7bd51d0c9a440a69b8efce4a1ec2fa208db7391e2a67d06dc"} Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.167469 4729 scope.go:117] "RemoveContainer" containerID="ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.167653 4729 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s6fzp" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.222110 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-kube-api-access-9qzx7" (OuterVolumeSpecName: "kube-api-access-9qzx7") pod "fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" (UID: "fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1"). InnerVolumeSpecName "kube-api-access-9qzx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.227001 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" (UID: "fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.229325 4729 scope.go:117] "RemoveContainer" containerID="36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.263748 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzx7\" (UniqueName: \"kubernetes.io/projected/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-kube-api-access-9qzx7\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.263793 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.301990 4729 scope.go:117] "RemoveContainer" containerID="c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.362890 4729 scope.go:117] "RemoveContainer" 
containerID="ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2" Jan 27 16:16:13 crc kubenswrapper[4729]: E0127 16:16:13.363356 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2\": container with ID starting with ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2 not found: ID does not exist" containerID="ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.363441 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2"} err="failed to get container status \"ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2\": rpc error: code = NotFound desc = could not find container \"ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2\": container with ID starting with ef762267f21cc179b3342c4bb121c925daf6d4aed6b166a39583b5500eab8cf2 not found: ID does not exist" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.363512 4729 scope.go:117] "RemoveContainer" containerID="36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809" Jan 27 16:16:13 crc kubenswrapper[4729]: E0127 16:16:13.363801 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809\": container with ID starting with 36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809 not found: ID does not exist" containerID="36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.363969 4729 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809"} err="failed to get container status \"36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809\": rpc error: code = NotFound desc = could not find container \"36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809\": container with ID starting with 36ca6e69e32060efd1b5ae5e0c3b5e048c0920c95533bc1fae43534df10e0809 not found: ID does not exist" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.363986 4729 scope.go:117] "RemoveContainer" containerID="c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41" Jan 27 16:16:13 crc kubenswrapper[4729]: E0127 16:16:13.364189 4729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41\": container with ID starting with c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41 not found: ID does not exist" containerID="c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.364206 4729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41"} err="failed to get container status \"c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41\": rpc error: code = NotFound desc = could not find container \"c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41\": container with ID starting with c827001a7aff06a06da2151ff0715bf6338804123aff83e05b3f2ced32a70a41 not found: ID does not exist" Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.534315 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s6fzp"] Jan 27 16:16:13 crc kubenswrapper[4729]: I0127 16:16:13.546463 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-s6fzp"] Jan 27 16:16:14 crc kubenswrapper[4729]: I0127 16:16:14.064998 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" path="/var/lib/kubelet/pods/fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1/volumes" Jan 27 16:16:18 crc kubenswrapper[4729]: I0127 16:16:18.270569 4729 scope.go:117] "RemoveContainer" containerID="7bc33b7c2b1946e110059cbe4954218f9184f3ca753a627372885a5ab41bc662" Jan 27 16:17:18 crc kubenswrapper[4729]: I0127 16:17:18.604972 4729 scope.go:117] "RemoveContainer" containerID="d3d97bb00e37c65c14d9a80c44eebf046ef8fa941197e017090ecd5b7cad025f" Jan 27 16:17:22 crc kubenswrapper[4729]: I0127 16:17:22.655249 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 16:17:22 crc kubenswrapper[4729]: I0127 16:17:22.655811 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.194212 4729 generic.go:334] "Generic (PLEG): container finished" podID="11e580c6-f2da-429f-894a-8d32d7ad242e" containerID="f1cf291732ad48da49276807961aa9bc090584454dc0d2853e20bf76b4bef726" exitCode=0 Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.194295 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mjwht/must-gather-67bkt" event={"ID":"11e580c6-f2da-429f-894a-8d32d7ad242e","Type":"ContainerDied","Data":"f1cf291732ad48da49276807961aa9bc090584454dc0d2853e20bf76b4bef726"} Jan 27 
16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.195733 4729 scope.go:117] "RemoveContainer" containerID="f1cf291732ad48da49276807961aa9bc090584454dc0d2853e20bf76b4bef726" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.510268 4729 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8zn4c"] Jan 27 16:17:44 crc kubenswrapper[4729]: E0127 16:17:44.510969 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="extract-content" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.511008 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="extract-content" Jan 27 16:17:44 crc kubenswrapper[4729]: E0127 16:17:44.511041 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="extract-utilities" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.511050 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="extract-utilities" Jan 27 16:17:44 crc kubenswrapper[4729]: E0127 16:17:44.511082 4729 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="registry-server" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.511091 4729 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="registry-server" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.511356 4729 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbaecc1b-2bf4-4ffb-bca5-e88bcdede2c1" containerName="registry-server" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.513508 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.525628 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zn4c"] Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.620437 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26cx7\" (UniqueName: \"kubernetes.io/projected/1f4f8744-707b-4891-b6cd-c246dcf26ccc-kube-api-access-26cx7\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.620605 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-utilities\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.620639 4729 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-catalog-content\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.724149 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26cx7\" (UniqueName: \"kubernetes.io/projected/1f4f8744-707b-4891-b6cd-c246dcf26ccc-kube-api-access-26cx7\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.724379 4729 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-utilities\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.724411 4729 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-catalog-content\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.725004 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-utilities\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.725125 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-catalog-content\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.755543 4729 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26cx7\" (UniqueName: \"kubernetes.io/projected/1f4f8744-707b-4891-b6cd-c246dcf26ccc-kube-api-access-26cx7\") pod \"community-operators-8zn4c\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") " pod="openshift-marketplace/community-operators-8zn4c" Jan 27 16:17:44 crc kubenswrapper[4729]: I0127 16:17:44.837343 4729 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:17:45 crc kubenswrapper[4729]: I0127 16:17:45.172455 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mjwht_must-gather-67bkt_11e580c6-f2da-429f-894a-8d32d7ad242e/gather/0.log"
Jan 27 16:17:45 crc kubenswrapper[4729]: I0127 16:17:45.989555 4729 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8zn4c"]
Jan 27 16:17:46 crc kubenswrapper[4729]: I0127 16:17:46.214992 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerStarted","Data":"0aa719ecbcd38d0d5f6817d4fe05997f5187e4a4607e10295841bde798342f85"}
Jan 27 16:17:47 crc kubenswrapper[4729]: I0127 16:17:47.226601 4729 generic.go:334] "Generic (PLEG): container finished" podID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerID="1764a9e2758ed4ee5891d7645b8b5419cc16cae08fc9900d54d8ccd0ea0904e3" exitCode=0
Jan 27 16:17:47 crc kubenswrapper[4729]: I0127 16:17:47.226747 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerDied","Data":"1764a9e2758ed4ee5891d7645b8b5419cc16cae08fc9900d54d8ccd0ea0904e3"}
Jan 27 16:17:50 crc kubenswrapper[4729]: I0127 16:17:50.268244 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerStarted","Data":"7e8db7e25102021afd1272865574863af31fe7f9a0fd784ca12db34d54fd9eda"}
Jan 27 16:17:52 crc kubenswrapper[4729]: I0127 16:17:52.655257 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:17:52 crc kubenswrapper[4729]: I0127 16:17:52.655570 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:17:54 crc kubenswrapper[4729]: E0127 16:17:54.156690 4729 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f4f8744_707b_4891_b6cd_c246dcf26ccc.slice/crio-7e8db7e25102021afd1272865574863af31fe7f9a0fd784ca12db34d54fd9eda.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f4f8744_707b_4891_b6cd_c246dcf26ccc.slice/crio-conmon-7e8db7e25102021afd1272865574863af31fe7f9a0fd784ca12db34d54fd9eda.scope\": RecentStats: unable to find data in memory cache]"
Jan 27 16:17:54 crc kubenswrapper[4729]: I0127 16:17:54.313868 4729 generic.go:334] "Generic (PLEG): container finished" podID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerID="7e8db7e25102021afd1272865574863af31fe7f9a0fd784ca12db34d54fd9eda" exitCode=0
Jan 27 16:17:54 crc kubenswrapper[4729]: I0127 16:17:54.313930 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerDied","Data":"7e8db7e25102021afd1272865574863af31fe7f9a0fd784ca12db34d54fd9eda"}
Jan 27 16:17:55 crc kubenswrapper[4729]: I0127 16:17:55.326375 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerStarted","Data":"b20659c9e45855e94c1969083fca6a25220e92af379c9d27c302aead68fed7d2"}
Jan 27 16:17:55 crc kubenswrapper[4729]: I0127 16:17:55.355540 4729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8zn4c" podStartSLOduration=3.660415016 podStartE2EDuration="11.355520092s" podCreationTimestamp="2026-01-27 16:17:44 +0000 UTC" firstStartedPulling="2026-01-27 16:17:47.229085296 +0000 UTC m=+7953.813276300" lastFinishedPulling="2026-01-27 16:17:54.924190372 +0000 UTC m=+7961.508381376" observedRunningTime="2026-01-27 16:17:55.344309501 +0000 UTC m=+7961.928500505" watchObservedRunningTime="2026-01-27 16:17:55.355520092 +0000 UTC m=+7961.939711096"
Jan 27 16:18:04 crc kubenswrapper[4729]: I0127 16:18:04.838320 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:18:04 crc kubenswrapper[4729]: I0127 16:18:04.838997 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:18:05 crc kubenswrapper[4729]: I0127 16:18:05.929828 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8zn4c" podUID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:18:05 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:18:05 crc kubenswrapper[4729]: >
Jan 27 16:18:09 crc kubenswrapper[4729]: I0127 16:18:09.377922 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mjwht/must-gather-67bkt"]
Jan 27 16:18:09 crc kubenswrapper[4729]: I0127 16:18:09.381447 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mjwht/must-gather-67bkt" podUID="11e580c6-f2da-429f-894a-8d32d7ad242e" containerName="copy" containerID="cri-o://7f3b8f08bdf831793a6597fbc77e9124fa6ad5802c485c33092e376f7b4d94f9" gracePeriod=2
Jan 27 16:18:09 crc kubenswrapper[4729]: I0127 16:18:09.390917 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mjwht/must-gather-67bkt"]
Jan 27 16:18:09 crc kubenswrapper[4729]: I0127 16:18:09.660743 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mjwht_must-gather-67bkt_11e580c6-f2da-429f-894a-8d32d7ad242e/copy/0.log"
Jan 27 16:18:09 crc kubenswrapper[4729]: I0127 16:18:09.661738 4729 generic.go:334] "Generic (PLEG): container finished" podID="11e580c6-f2da-429f-894a-8d32d7ad242e" containerID="7f3b8f08bdf831793a6597fbc77e9124fa6ad5802c485c33092e376f7b4d94f9" exitCode=143
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.271524 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mjwht_must-gather-67bkt_11e580c6-f2da-429f-894a-8d32d7ad242e/copy/0.log"
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.272125 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/must-gather-67bkt"
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.282293 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9c7r\" (UniqueName: \"kubernetes.io/projected/11e580c6-f2da-429f-894a-8d32d7ad242e-kube-api-access-r9c7r\") pod \"11e580c6-f2da-429f-894a-8d32d7ad242e\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") "
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.282587 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11e580c6-f2da-429f-894a-8d32d7ad242e-must-gather-output\") pod \"11e580c6-f2da-429f-894a-8d32d7ad242e\" (UID: \"11e580c6-f2da-429f-894a-8d32d7ad242e\") "
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.308805 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11e580c6-f2da-429f-894a-8d32d7ad242e-kube-api-access-r9c7r" (OuterVolumeSpecName: "kube-api-access-r9c7r") pod "11e580c6-f2da-429f-894a-8d32d7ad242e" (UID: "11e580c6-f2da-429f-894a-8d32d7ad242e"). InnerVolumeSpecName "kube-api-access-r9c7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.385374 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9c7r\" (UniqueName: \"kubernetes.io/projected/11e580c6-f2da-429f-894a-8d32d7ad242e-kube-api-access-r9c7r\") on node \"crc\" DevicePath \"\""
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.580163 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11e580c6-f2da-429f-894a-8d32d7ad242e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "11e580c6-f2da-429f-894a-8d32d7ad242e" (UID: "11e580c6-f2da-429f-894a-8d32d7ad242e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.591210 4729 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11e580c6-f2da-429f-894a-8d32d7ad242e-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.674815 4729 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mjwht_must-gather-67bkt_11e580c6-f2da-429f-894a-8d32d7ad242e/copy/0.log"
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.675293 4729 scope.go:117] "RemoveContainer" containerID="7f3b8f08bdf831793a6597fbc77e9124fa6ad5802c485c33092e376f7b4d94f9"
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.675366 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mjwht/must-gather-67bkt"
Jan 27 16:18:10 crc kubenswrapper[4729]: I0127 16:18:10.712514 4729 scope.go:117] "RemoveContainer" containerID="f1cf291732ad48da49276807961aa9bc090584454dc0d2853e20bf76b4bef726"
Jan 27 16:18:12 crc kubenswrapper[4729]: I0127 16:18:12.070089 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11e580c6-f2da-429f-894a-8d32d7ad242e" path="/var/lib/kubelet/pods/11e580c6-f2da-429f-894a-8d32d7ad242e/volumes"
Jan 27 16:18:15 crc kubenswrapper[4729]: I0127 16:18:15.921168 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8zn4c" podUID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:18:15 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:18:15 crc kubenswrapper[4729]: >
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.655644 4729 patch_prober.go:28] interesting pod/machine-config-daemon-khqcl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.656332 4729 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.656386 4729 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-khqcl"
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.657979 4729 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"} pod="openshift-machine-config-operator/machine-config-daemon-khqcl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.658060 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerName="machine-config-daemon" containerID="cri-o://5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32" gracePeriod=600
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.819224 4729 generic.go:334] "Generic (PLEG): container finished" podID="8919c7c3-b36c-4bf1-8aed-355b818721a4" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32" exitCode=0
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.819280 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" event={"ID":"8919c7c3-b36c-4bf1-8aed-355b818721a4","Type":"ContainerDied","Data":"5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"}
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.819316 4729 scope.go:117] "RemoveContainer" containerID="f7509ca13830b3e0dcb10ef0c4eb35f32aa621b86eabdac6413e3c6572129757"
Jan 27 16:18:22 crc kubenswrapper[4729]: E0127 16:18:22.824189 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:18:22 crc kubenswrapper[4729]: I0127 16:18:22.825364 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:18:22 crc kubenswrapper[4729]: E0127 16:18:22.825747 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:18:26 crc kubenswrapper[4729]: I0127 16:18:26.016838 4729 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8zn4c" podUID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerName="registry-server" probeResult="failure" output=<
Jan 27 16:18:26 crc kubenswrapper[4729]: timeout: failed to connect service ":50051" within 1s
Jan 27 16:18:26 crc kubenswrapper[4729]: >
Jan 27 16:18:34 crc kubenswrapper[4729]: I0127 16:18:34.059682 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:18:34 crc kubenswrapper[4729]: E0127 16:18:34.061399 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:18:34 crc kubenswrapper[4729]: I0127 16:18:34.910076 4729 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:18:34 crc kubenswrapper[4729]: I0127 16:18:34.968145 4729 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:18:35 crc kubenswrapper[4729]: I0127 16:18:35.150439 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zn4c"]
Jan 27 16:18:35 crc kubenswrapper[4729]: I0127 16:18:35.973696 4729 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8zn4c" podUID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerName="registry-server" containerID="cri-o://b20659c9e45855e94c1969083fca6a25220e92af379c9d27c302aead68fed7d2" gracePeriod=2
Jan 27 16:18:36 crc kubenswrapper[4729]: I0127 16:18:36.988811 4729 generic.go:334] "Generic (PLEG): container finished" podID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" containerID="b20659c9e45855e94c1969083fca6a25220e92af379c9d27c302aead68fed7d2" exitCode=0
Jan 27 16:18:36 crc kubenswrapper[4729]: I0127 16:18:36.988955 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerDied","Data":"b20659c9e45855e94c1969083fca6a25220e92af379c9d27c302aead68fed7d2"}
Jan 27 16:18:37 crc kubenswrapper[4729]: I0127 16:18:37.832138 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:18:37 crc kubenswrapper[4729]: I0127 16:18:37.981066 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-catalog-content\") pod \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") "
Jan 27 16:18:37 crc kubenswrapper[4729]: I0127 16:18:37.981177 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-utilities\") pod \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") "
Jan 27 16:18:37 crc kubenswrapper[4729]: I0127 16:18:37.981366 4729 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26cx7\" (UniqueName: \"kubernetes.io/projected/1f4f8744-707b-4891-b6cd-c246dcf26ccc-kube-api-access-26cx7\") pod \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\" (UID: \"1f4f8744-707b-4891-b6cd-c246dcf26ccc\") "
Jan 27 16:18:37 crc kubenswrapper[4729]: I0127 16:18:37.981960 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-utilities" (OuterVolumeSpecName: "utilities") pod "1f4f8744-707b-4891-b6cd-c246dcf26ccc" (UID: "1f4f8744-707b-4891-b6cd-c246dcf26ccc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.001623 4729 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8zn4c" event={"ID":"1f4f8744-707b-4891-b6cd-c246dcf26ccc","Type":"ContainerDied","Data":"0aa719ecbcd38d0d5f6817d4fe05997f5187e4a4607e10295841bde798342f85"}
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.001681 4729 scope.go:117] "RemoveContainer" containerID="b20659c9e45855e94c1969083fca6a25220e92af379c9d27c302aead68fed7d2"
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.001702 4729 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8zn4c"
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.043639 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4f8744-707b-4891-b6cd-c246dcf26ccc-kube-api-access-26cx7" (OuterVolumeSpecName: "kube-api-access-26cx7") pod "1f4f8744-707b-4891-b6cd-c246dcf26ccc" (UID: "1f4f8744-707b-4891-b6cd-c246dcf26ccc"). InnerVolumeSpecName "kube-api-access-26cx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.053259 4729 scope.go:117] "RemoveContainer" containerID="7e8db7e25102021afd1272865574863af31fe7f9a0fd784ca12db34d54fd9eda"
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.084253 4729 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-utilities\") on node \"crc\" DevicePath \"\""
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.084287 4729 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26cx7\" (UniqueName: \"kubernetes.io/projected/1f4f8744-707b-4891-b6cd-c246dcf26ccc-kube-api-access-26cx7\") on node \"crc\" DevicePath \"\""
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.085561 4729 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f4f8744-707b-4891-b6cd-c246dcf26ccc" (UID: "1f4f8744-707b-4891-b6cd-c246dcf26ccc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.105505 4729 scope.go:117] "RemoveContainer" containerID="1764a9e2758ed4ee5891d7645b8b5419cc16cae08fc9900d54d8ccd0ea0904e3"
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.187164 4729 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f4f8744-707b-4891-b6cd-c246dcf26ccc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.336717 4729 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8zn4c"]
Jan 27 16:18:38 crc kubenswrapper[4729]: I0127 16:18:38.347794 4729 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8zn4c"]
Jan 27 16:18:40 crc kubenswrapper[4729]: I0127 16:18:40.066450 4729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4f8744-707b-4891-b6cd-c246dcf26ccc" path="/var/lib/kubelet/pods/1f4f8744-707b-4891-b6cd-c246dcf26ccc/volumes"
Jan 27 16:18:45 crc kubenswrapper[4729]: I0127 16:18:45.052653 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:18:45 crc kubenswrapper[4729]: E0127 16:18:45.053849 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:18:59 crc kubenswrapper[4729]: I0127 16:18:59.051495 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:18:59 crc kubenswrapper[4729]: E0127 16:18:59.053565 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:19:11 crc kubenswrapper[4729]: I0127 16:19:11.050828 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:19:11 crc kubenswrapper[4729]: E0127 16:19:11.051645 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:19:25 crc kubenswrapper[4729]: I0127 16:19:25.050827 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:19:25 crc kubenswrapper[4729]: E0127 16:19:25.051660 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:19:36 crc kubenswrapper[4729]: I0127 16:19:36.051218 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:19:36 crc kubenswrapper[4729]: E0127 16:19:36.052227 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:19:51 crc kubenswrapper[4729]: I0127 16:19:51.052066 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:19:51 crc kubenswrapper[4729]: E0127 16:19:51.053055 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:20:06 crc kubenswrapper[4729]: I0127 16:20:06.051133 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:20:06 crc kubenswrapper[4729]: E0127 16:20:06.053194 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:20:17 crc kubenswrapper[4729]: I0127 16:20:17.052311 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:20:17 crc kubenswrapper[4729]: E0127 16:20:17.053083 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:20:18 crc kubenswrapper[4729]: I0127 16:20:18.965147 4729 scope.go:117] "RemoveContainer" containerID="da84de481564f803f69cc1667f124c38a6a830d63ebaa51e9eff769c90f07464"
Jan 27 16:20:19 crc kubenswrapper[4729]: I0127 16:20:19.013748 4729 scope.go:117] "RemoveContainer" containerID="1528e7d0dff0b9e26b72e0d915e14dbe8bffe5e5387f6994046cfda14c77465a"
Jan 27 16:20:19 crc kubenswrapper[4729]: I0127 16:20:19.045806 4729 scope.go:117] "RemoveContainer" containerID="07039756a2e293281367f4f152c94b4d030d0f334446954d986f014d3ed18b2b"
Jan 27 16:20:30 crc kubenswrapper[4729]: I0127 16:20:30.051262 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:20:30 crc kubenswrapper[4729]: E0127 16:20:30.052034 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:20:44 crc kubenswrapper[4729]: I0127 16:20:44.059707 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:20:44 crc kubenswrapper[4729]: E0127 16:20:44.060777 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:20:59 crc kubenswrapper[4729]: I0127 16:20:59.051028 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:20:59 crc kubenswrapper[4729]: E0127 16:20:59.051974 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:21:12 crc kubenswrapper[4729]: I0127 16:21:12.051431 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:21:12 crc kubenswrapper[4729]: E0127 16:21:12.053660 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:21:25 crc kubenswrapper[4729]: I0127 16:21:25.050985 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:21:25 crc kubenswrapper[4729]: E0127 16:21:25.052069 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:21:40 crc kubenswrapper[4729]: I0127 16:21:40.052793 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:21:40 crc kubenswrapper[4729]: E0127 16:21:40.056006 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:21:51 crc kubenswrapper[4729]: I0127 16:21:51.050313 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:21:51 crc kubenswrapper[4729]: E0127 16:21:51.051024 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"
Jan 27 16:22:05 crc kubenswrapper[4729]: I0127 16:22:05.051428 4729 scope.go:117] "RemoveContainer" containerID="5a48cf436de1a3bc2a8a985e8fe0411d368b47ce62410de79990ac0a29c2cd32"
Jan 27 16:22:05 crc kubenswrapper[4729]: E0127 16:22:05.052173 4729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-khqcl_openshift-machine-config-operator(8919c7c3-b36c-4bf1-8aed-355b818721a4)\"" pod="openshift-machine-config-operator/machine-config-daemon-khqcl" podUID="8919c7c3-b36c-4bf1-8aed-355b818721a4"